Compare commits

..

133 Commits

Author SHA1 Message Date
henryruhs
098f64916f API experiment part2 + ugly frontend 2024-12-09 14:29:01 +01:00
henryruhs
382a036f66 API experiment part1 2024-12-08 01:53:01 +01:00
henryruhs
63399c04d0 Change order in choices and typing 2024-12-07 00:58:49 +01:00
Henry Ruhs
0c0062db72
Feat/ffmpeg with progress (#827)
* FFmpeg with progress bar

* Fix typing

* FFmpeg with progress bar part2

* Restore streaming wording
2024-12-06 11:33:37 +01:00
henryruhs
70f0c708e1 Use tolerant for video_memory_strategy in benchmark 2024-12-05 14:13:43 +01:00
henryruhs
03acb4ac8a Update dependencies 2024-12-04 18:43:52 +01:00
henryruhs
a20dfd3b28 Fix wording 2024-12-04 00:14:41 +01:00
henryruhs
ca52e2ea24 Fix wording 2024-12-03 23:40:24 +01:00
henryruhs
9cee75542a Rename create_log_level_program to create_misc_program 2024-12-03 22:50:30 +01:00
henryruhs
bc5934950d Update README 2024-12-03 22:14:01 +01:00
henryruhs
3ec187425f Fix space 2024-12-03 22:14:01 +01:00
henryruhs
d362796bee Change source-paths behaviour 2024-12-01 21:36:24 +01:00
Henry Ruhs
33db393af9
Introduce download scopes (#826)
* Introduce download scopes

* Limit download scopes to force-download command
2024-11-26 18:21:11 +01:00
henryruhs
acdddc730e Add deep swapper models by Edel 2024-11-25 23:42:11 +01:00
henryruhs
12a01cab8f Remove duplicates (Rumateus is the creator) 2024-11-25 22:21:31 +01:00
henryruhs
2adb716d15 Fix CoreML partially 2024-11-25 20:26:57 +01:00
henryruhs
d73d75378f Change TempFrameFormat order 2024-11-24 22:39:10 +01:00
henryruhs
c93de4ecd2 Remove as this does not work 2024-11-24 00:16:58 +01:00
henryruhs
216d1f05cd Kill resolve_execution_provider_keys() and move fallbacks where they belong 2024-11-23 22:59:03 +01:00
henryruhs
bae8c65cf0 Kill resolve_execution_provider_keys() and move fallbacks where they belong 2024-11-23 22:53:26 +01:00
henryruhs
2f98ac8471 Fix space 2024-11-23 21:21:42 +01:00
henryruhs
1348575d05 Fix resolve_download_url 2024-11-23 18:03:45 +01:00
henryruhs
44b4c926da Use resolve_download_url() everywhere, Vanish --skip-download flag 2024-11-23 17:56:08 +01:00
henryruhs
efd4071a3e Switch to latest XSeg 2024-11-22 17:36:10 +01:00
henryruhs
a96ea0fae4 Switch to latest XSeg 2024-11-22 17:27:38 +01:00
henryruhs
f3523d91c4 Switch to latest XSeg 2024-11-22 17:23:35 +01:00
henryruhs
003fa61fcd Undo restore_audio() 2024-11-22 15:13:59 +01:00
henryruhs
4fb1d0b1f3 Fix model key 2024-11-22 09:55:35 +01:00
Henry Ruhs
b4f1a0e083
Move clear over to the UI (#825) 2024-11-21 11:02:26 +01:00
Henry Ruhs
48440407e2
Introduce create_static_model_set() everywhere (#824) 2024-11-20 21:05:18 +01:00
henryruhs
ab34dbb991 Add deep swapper models by Jen 2024-11-20 14:23:47 +01:00
henryruhs
ba874607d7 Use static model set creation 2024-11-19 22:46:38 +01:00
henryruhs
01420056b4 Fix face enhancer blend in UI 2024-11-19 18:28:03 +01:00
Henry Ruhs
ffac0783d9
Implement face enhancer weight for codeformer, Side Quest: has proces… (#823)
* Implement face enhancer weight for codeformer, Side Quest: has processor checks

* Fix typo
2024-11-19 15:08:11 +01:00
henryruhs
c3f58b2d0f Add deep swapper models by Rumateus 2024-11-18 23:38:55 +01:00
henryruhs
c18d7a4d4f Add deep swapper models by Druuzil 2024-11-18 23:19:37 +01:00
henryruhs
b6e895fcf6 Add deep swapper models by Mats 2024-11-18 22:26:46 +01:00
harisreedhar
3cf06de27f remove dfl_head and update dfl_whole_face template 2024-11-18 19:43:35 +05:30
Henry Ruhs
48016eaba3
Show/hide morph slider for deep swapper (#822) 2024-11-18 10:59:40 +01:00
Christian Clauss
aada3ff618
ci.yml: Add macOS on ARM64 to the testing (#818)
* ci.yml: Add macOS on ARM64 to the testing

* ci.yml: uses: AnimMouse/setup-ffmpeg@v1

* ci.yml: strategy: matrix: os: macos-latest,

* - name: Set up FFmpeg

* Update .github/workflows/ci.yml

* Update ci.yml

---------

Co-authored-by: Henry Ruhs <info@henryruhs.com>
2024-11-17 09:54:47 +01:00
henryruhs
fb15d0031e Introduce model helper 2024-11-16 22:52:26 +01:00
henryruhs
a043703a3d Fix first black screen 2024-11-16 22:41:04 +01:00
henryruhs
501b522914 Simplify thumbnail-item looks 2024-11-16 15:20:52 +01:00
henryruhs
96a34ce9ff Kill accent colors, Number input styles for Chrome 2024-11-16 15:13:19 +01:00
henryruhs
5188431d23 Fix deep swapper sizes 2024-11-15 22:44:55 +01:00
Henry Ruhs
db64c529d0
Add more deepfacelive models (#817)
* Add more deepfacelive models

* Add more deepfacelive models
2024-11-15 22:02:52 +01:00
henryruhs
ba71e96302 Fix preview refresh after slide 2024-11-15 21:39:01 +01:00
Harisreedhar
5a0c2cad96
DFM Morph (#816)
* changes

* Improve wording, Replace [None], SideQuest: clean forward() of age modifier

* SideQuest: clean forward() of face enhancer

---------

Co-authored-by: henryruhs <info@henryruhs.com>
2024-11-15 18:26:08 +01:00
Harisreedhar
28f7dba897
Merge pull request #815 from facefusion/changes/dfl-template-approach
change dfl to template alignment
2024-11-15 21:11:04 +05:30
harisreedhar
24571b47e2 fix warp_face_by_bounding_box dtype error 2024-11-15 20:50:42 +05:30
harisreedhar
1355549acf changes 2024-11-15 20:24:05 +05:30
Henry Ruhs
9d0c377aa0
Feat/simplify hashes sources download (#814)
* Extract download directory path from assets path

* Fix lint

* Fix force-download command, Fix urls in frame enhancer
2024-11-14 22:34:23 +01:00
henryruhs
74c61108dd Use different morph value 2024-11-14 20:39:24 +01:00
henryruhs
7605b5451b Add more deepfacelive models 2024-11-14 15:06:03 +01:00
henryruhs
ca035068fe Make deep swapper inputs universal 2024-11-14 13:11:24 +01:00
henryruhs
95bcf67a75 Rename bulk-run to batch-run 2024-11-14 10:45:14 +01:00
Henry Ruhs
4cb1fe276e
Improve resolve download 2024-11-14 01:45:07 +01:00
Henry Ruhs
b7af0c1d9b
Fix name 2024-11-14 01:10:16 +01:00
henryruhs
e5cfc5367e Rename template key to deepfacelive 2024-11-14 00:24:20 +01:00
henryruhs
50837a6ba5 Improve NVIDIA device lookups 2024-11-13 17:34:51 +01:00
Harisreedhar
c7e7751b81
Merge pull request #812 from facefusion/improvement/deep-swapper-alignment
Improvement/deep swapper alignment
2024-11-13 19:50:30 +05:30
harisreedhar
60498b4e9a changes 2024-11-13 19:23:43 +05:30
Henry Ruhs
bdae74a792
Update Python to 3.12 for CI (#813) 2024-11-13 13:41:59 +01:00
harisreedhar
37239f06f6 changes 2024-11-13 16:48:12 +05:30
harisreedhar
0b6dd6c8b1 changes 2024-11-13 16:07:14 +05:30
henryruhs
a6929d6cb4 Allow bulk runner with target pattern only 2024-11-12 19:06:33 +01:00
Henry Ruhs
244df12ff8
Add safer global named resolve_file_pattern() (#811) 2024-11-12 12:00:45 +01:00
harisreedhar
18244da99f new alignment 2024-11-12 15:30:25 +05:30
Henry Ruhs
8385e199f4
Introduce bulk-run (#810)
* Introduce bulk-run

* Make bulk run bullet proof

* Integration test for bulk-run
2024-11-12 10:41:25 +01:00
henryruhs
f53f959510 Fix model paths for 3.1.0 2024-11-12 01:51:55 +01:00
Henry Ruhs
931e4a1418 Feat/download providers (#809)
* Introduce download providers

* update processors download method

* add ui

* Fix CI

* Adjust UI component order, Use download resolver for benchmark

* Remove is_download_done()

* Introduce download provider set, Remove choices method from execution, cast all dict keys() via list()

* Fix spacing

---------

Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>
2024-11-11 22:07:51 +01:00
harisreedhar
7ddd7f47d6 changes 2024-11-11 22:07:51 +01:00
harisreedhar
ca6bc65abf changes 2024-11-11 22:07:51 +01:00
harisreedhar
48f73238df changes 2024-11-11 22:07:51 +01:00
harisreedhar
ca9ccbfd35 changes 2024-11-11 22:07:51 +01:00
henryruhs
feaaf6c028 Remove vendor from model name 2024-11-11 22:07:51 +01:00
henryruhs
071b568313 Remove vendor from model name 2024-11-11 22:07:51 +01:00
henryruhs
81b4e90261 Release five frame enhancer models 2024-11-11 22:07:51 +01:00
henryruhs
15e2a98d6d Add namespace for dfm creators 2024-11-11 22:07:51 +01:00
harisreedhar
e88cc65b33 changes 2024-11-11 22:07:51 +01:00
harisreedhar
cae4f0f33a changes 2024-11-11 22:07:51 +01:00
harisreedhar
a4161ccade changes 2024-11-11 22:07:51 +01:00
harisreedhar
eb409a99d1 add to facefusion.ini 2024-11-11 22:07:51 +01:00
harisreedhar
119a8bcadc changes 2024-11-11 22:07:51 +01:00
harisreedhar
82e2677649 remove model size requirement 2024-11-11 22:07:51 +01:00
harisreedhar
9f0a6b747f changes 2024-11-11 22:07:51 +01:00
harisreedhar
447ca53d54 adaptive color correction 2024-11-11 22:07:51 +01:00
harisreedhar
95a63ea7a2 add both mask instead of multiply 2024-11-11 22:07:51 +01:00
harisreedhar
518e00ff22 changes 2024-11-11 22:07:51 +01:00
henryruhs
00c7c6a6ba Remove cudnn_conv_algo_search tweaks 2024-11-11 22:07:51 +01:00
henryruhs
696b54099e Remove cudnn_conv_algo_search tweaks 2024-11-11 22:07:51 +01:00
henryruhs
b05e25cf36 Update onnxruntime (second try) 2024-11-11 22:07:51 +01:00
henryruhs
f89398d686 Update onnxruntime (second try) 2024-11-11 22:07:51 +01:00
Henry Ruhs
4bffa0d183 Fix/enforce vp9 for webm (#805)
* Simple fix to enforce vp9 for webm

* Remove suggest methods from program helper

* Cleanup ffmpeg.py a bit
2024-11-11 22:07:51 +01:00
henryruhs
965da98745 Revert due terrible performance 2024-11-11 22:07:51 +01:00
henryruhs
885d5472ce Adjust color for checkboxes 2024-11-11 22:07:51 +01:00
henryruhs
af54dc1c76 Update dependencies 2024-11-11 22:07:51 +01:00
henryruhs
871f15fe20 Update onnxruntime 2024-11-11 22:07:51 +01:00
Henry Ruhs
d149a71a1b Feat/temp path second try (#802)
* Terminate base directory from temp helper

* Partial adjust program codebase

* Move arguments around

* Make `-j` absolete

* Resolve args

* Fix job register keys

* Adjust date test

* Finalize temp path
2024-11-11 22:07:51 +01:00
henryruhs
d650d8fa86 Update dependencies 2024-11-11 22:07:51 +01:00
henryruhs
9b46be5034 Gradio pinned python-multipart to 0.0.12 2024-11-11 22:07:51 +01:00
henryruhs
c94b617827 Add __pycache__ to gitignore 2024-11-11 22:07:51 +01:00
henryruhs
0eb2833c02 Switch to official assets repo 2024-11-11 22:07:51 +01:00
Harisreedhar
5c9d893dab changes (#801) 2024-11-11 22:07:51 +01:00
henryruhs
f75410e1e1 Minor cleanup 2024-11-11 22:07:51 +01:00
henryruhs
caa8347ff0 Minor cleanup 2024-11-11 22:07:51 +01:00
henryruhs
2bc78aebe1 Minor cleanup 2024-11-11 22:07:51 +01:00
Harisreedhar
04bbb89756 Improved color matching (#800)
* aura fix

* fix import

* move to vision.py

* changes

* changes

* changes

* changes

* further reduction

* add test

* better test

* change name
2024-11-11 22:07:51 +01:00
henryruhs
efb7cf41ee Adjust naming 2024-11-11 22:07:51 +01:00
henryruhs
68c9c5697d Make slider inputs and reset like a unit 2024-11-11 22:07:51 +01:00
henryruhs
0aef1a99e6 Make slider inputs and reset like a unit 2024-11-11 22:07:51 +01:00
Henry Ruhs
d0bab20755 Feat/update gradio5 (#799)
* Update to Gradio 5

* Remove overrides for Gradio

* Fix dark mode for Gradio

* Polish errors

* More styles for tabs and co
2024-11-11 22:07:51 +01:00
Henry Ruhs
cd85a454f2 Fix/age modifier styleganex 512 (#798)
* fix

* styleganex template

* changes

* changes

* fix occlusion mask

* add age modifier scale

* change

* change

* hardcode

* Cleanup

* Use model_sizes and model_templates variables

* No need for prepare when just 2 lines of code

* Someone used spaces over tabs

* Revert back [0][0]

---------

Co-authored-by: harisreedhar <h4harisreedhar.s.s@gmail.com>
2024-11-11 22:07:51 +01:00
henryruhs
20d2b6a4ea Fix OpenVINO by aliasing GPU.0 to GPU 2024-11-11 22:07:51 +01:00
henryruhs
853474bf79 Fix OpenVINO by aliasing GPU.0 to GPU 2024-11-11 22:07:51 +01:00
henryruhs
a3c228f1b1 Fix state of face selector 2024-11-11 22:07:51 +01:00
henryruhs
f6441c2142 Need for Python 3.10 2024-11-11 22:07:51 +01:00
henryruhs
44e418fcb9 Do hard exit on invalid args 2024-11-11 22:07:51 +01:00
henryruhs
0200a23276 Prevent duplicate entries to local PATH 2024-11-11 22:07:51 +01:00
henryruhs
25fefdedcb Remove shortest and use fixed video duration 2024-11-11 22:07:51 +01:00
henryruhs
94963ee47d Remove shortest and use fixed video duration 2024-11-11 22:07:51 +01:00
henryruhs
2aa9b04874 Fix replace_audio() 2024-11-11 22:07:51 +01:00
henryruhs
9f2c3cb180 Testing for restore audio 2024-11-11 22:07:51 +01:00
henryruhs
784cb6c330 Testing for restore audio 2024-11-11 22:07:51 +01:00
henryruhs
56f7bbcf7f Testing for replace audio 2024-11-11 22:07:51 +01:00
henryruhs
39478f7d63 Cosmetics on ignore comments 2024-11-11 22:07:51 +01:00
Henry Ruhs
06740aeea0 Webcam polishing part1 (#796) 2024-11-11 22:07:51 +01:00
henryruhs
8ef133ace9 Disable stream for expression restorer 2024-11-11 22:07:51 +01:00
henryruhs
0e4f69ce56 Introduce hififace swapper 2024-11-11 22:07:51 +01:00
henryruhs
ce3dac7718 Fix return type 2024-11-11 22:07:51 +01:00
henryruhs
ad82ee8468 Fix spaces and newlines 2024-11-11 22:07:51 +01:00
DDXDB
55e7535ed5 add H264_qsv&HEVC_qsv (#768)
* Update ffmpeg.py

* Update choices.py

* Update typing.py
2024-11-11 22:07:51 +01:00
henryruhs
432ae587dc Replace audio whenever set via source 2024-11-11 22:07:48 +01:00
56 changed files with 1090 additions and 782 deletions

BIN .github/preview.png vendored (binary file not shown): 1.3 MiB before, 1.2 MiB after

@@ -32,8 +32,6 @@ reference_face_distance =
 reference_frame_number =
 [face_masker]
-face_occluder_model =
-face_parser_model =
 face_mask_types =
 face_mask_blur =
 face_mask_padding =

facefusion/_preview.py (new file, 72 lines)

@ -0,0 +1,72 @@
from typing import Optional
import cv2
import numpy
from facefusion import core, state_manager
from facefusion.audio import create_empty_audio_frame, get_audio_frame
from facefusion.common_helper import get_first
from facefusion.content_analyser import analyse_frame
from facefusion.face_analyser import get_average_face, get_many_faces
from facefusion.face_selector import sort_faces_by_order
from facefusion.face_store import get_reference_faces
from facefusion.filesystem import filter_audio_paths, is_image, is_video
from facefusion.processors.core import get_processors_modules
from facefusion.typing import AudioFrame, Face, FaceSet, VisionFrame
from facefusion.vision import get_video_frame, read_static_image, read_static_images, resize_frame_resolution
def process_frame(frame_number : int = 0) -> Optional[VisionFrame]:
core.conditional_append_reference_faces()
reference_faces = get_reference_faces() if 'reference' in state_manager.get_item('face_selector_mode') else None
source_frames = read_static_images(state_manager.get_item('source_paths'))
source_faces = []
for source_frame in source_frames:
temp_faces = get_many_faces([ source_frame ])
temp_faces = sort_faces_by_order(temp_faces, 'large-small')
if temp_faces:
source_faces.append(get_first(temp_faces))
source_face = get_average_face(source_faces)
source_audio_path = get_first(filter_audio_paths(state_manager.get_item('source_paths')))
source_audio_frame = create_empty_audio_frame()
if source_audio_path and state_manager.get_item('output_video_fps') and state_manager.get_item('reference_frame_number'):
reference_audio_frame_number = state_manager.get_item('reference_frame_number')
if state_manager.get_item('trim_frame_start'):
reference_audio_frame_number -= state_manager.get_item('trim_frame_start')
temp_audio_frame = get_audio_frame(source_audio_path, state_manager.get_item('output_video_fps'), reference_audio_frame_number)
if numpy.any(temp_audio_frame):
source_audio_frame = temp_audio_frame
if is_image(state_manager.get_item('target_path')):
target_vision_frame = read_static_image(state_manager.get_item('target_path'))
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, target_vision_frame)
return preview_vision_frame
if is_video(state_manager.get_item('target_path')):
temp_vision_frame = get_video_frame(state_manager.get_item('target_path'), frame_number)
preview_vision_frame = process_preview_frame(reference_faces, source_face, source_audio_frame, temp_vision_frame)
return preview_vision_frame
return None
def process_preview_frame(reference_faces : FaceSet, source_face : Face, source_audio_frame : AudioFrame, target_vision_frame : VisionFrame) -> VisionFrame:
target_vision_frame = resize_frame_resolution(target_vision_frame, (1024, 1024))
source_vision_frame = target_vision_frame.copy()
if analyse_frame(target_vision_frame):
return cv2.GaussianBlur(target_vision_frame, (99, 99), 0)
for processor_module in get_processors_modules(state_manager.get_item('processors')):
if processor_module.pre_process('preview'):
target_vision_frame = processor_module.process_frame(
{
'reference_faces': reference_faces,
'source_face': source_face,
'source_audio_frame': source_audio_frame,
'source_vision_frame': source_vision_frame,
'target_vision_frame': target_vision_frame
})
return target_vision_frame
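`process_preview_frame` caps the frame at 1024x1024 via `resize_frame_resolution` before any processor runs, so the preview stays cheap regardless of target size. The real helper lives in `facefusion.vision` and is not shown in this diff; the sketch below (with the hypothetical name `restrict_resolution`) only illustrates the assumed aspect-preserving, never-upscaling calculation.

```python
from typing import Tuple


def restrict_resolution(size : Tuple[int, int], limit : Tuple[int, int]) -> Tuple[int, int]:
	# Scale (width, height) down to fit inside limit while keeping
	# the aspect ratio; a scale capped at 1.0 means no upscaling.
	width, height = size
	limit_width, limit_height = limit
	scale = min(limit_width / width, limit_height / height, 1.0)
	return round(width * scale), round(height * scale)


print(restrict_resolution((3840, 2160), (1024, 1024)))  # landscape 4K shrinks to fit
print(restrict_resolution((640, 480), (1024, 1024)))    # already inside the limit, untouched
```

Because the cap is applied before `analyse_frame` and the processor loop, every downstream step sees the same bounded resolution.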

facefusion/api.py (new file, 121 lines)

@ -0,0 +1,121 @@
import asyncio
import json
from typing import Any, List
import cv2
import uvicorn
from litestar import Litestar, WebSocket, get as read, websocket as stream, websocket_listener
from litestar.static_files import create_static_files_router
from facefusion import _preview, choices, execution, state_manager, vision
from facefusion.processors import choices as processors_choices
from facefusion.state_manager import get_state
from facefusion.typing import ExecutionDevice
@read('/choices')
async def read_choices() -> Any:
__choices__ = {}
for key in dir(choices):
if not key.startswith('__'):
value = getattr(choices, key)
if isinstance(value, (dict, list)):
__choices__[key] = value
return __choices__
@read('/processors/choices')
async def read_processors_choices() -> Any:
__processors_choices__ = {}
for key in dir(processors_choices):
if not key.startswith('__'):
value = getattr(processors_choices, key)
if isinstance(value, (dict, list)):
__processors_choices__[key] = value
return __processors_choices__
@read('/execution/providers')
async def read_execution_providers() -> Any:
return execution.get_execution_provider_set()
@stream('/execution/devices')
async def stream_execution_devices(socket : WebSocket[Any, Any, Any]) -> None:
await socket.accept()
while True:
await socket.send_json(execution.detect_execution_devices())
await asyncio.sleep(0.5)
@read('/execution/devices')
async def read_execution_devices() -> List[ExecutionDevice]:
return execution.detect_execution_devices()
@read('/static_execution/devices')
async def read_static_execution_devices() -> List[ExecutionDevice]:
return execution.detect_static_execution_devices()
@stream('/state')
async def stream_state(socket : WebSocket[Any, Any, Any]) -> None:
await socket.accept()
while True:
await socket.send_json(get_state())
await asyncio.sleep(0.5)
@read('/preview', media_type = 'image/png', mode = "binary")
async def read_preview(frame_number : int) -> bytes:
_, preview_vision_frame = cv2.imencode('.png', _preview.process_frame(frame_number)) #type:ignore
return preview_vision_frame.tobytes()
@websocket_listener("/preview", send_mode = "binary")
async def stream_preview(data : str) -> bytes:
frame_number = int(json.loads(data).get('frame_number'))
_, preview_vision_frame = cv2.imencode('.png', _preview.process_frame(frame_number)) #type:ignore
return preview_vision_frame.tobytes()
@read('/ui/preview_slider')
async def read_ui_preview_slider() -> Any:
target_path = state_manager.get_item('target_path')
video_frame_total = vision.count_video_frame_total(target_path)
return\
{
'video_frame_total': video_frame_total
}
api = Litestar(
[
read_choices,
read_processors_choices,
stream_execution_devices,
read_execution_devices,
read_static_execution_devices,
stream_state,
read_preview,
read_ui_preview_slider,
stream_preview,
create_static_files_router(
path = '/frontend',
directories = [ 'facefusion/static' ],
html_mode = True,
)
])
def run() -> None:
uvicorn.run(api)
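The two choices handlers above share one pattern: walk a module's namespace with `dir()` and keep only the plain `dict`/`list` attributes, which exposes every choice table over HTTP without hand-written serializers. A minimal, self-contained sketch of that pattern, using a stand-in module rather than the real `facefusion.choices`:

```python
import types

# Stand-in for a choices module: its public dict/list attributes
# are the values we want to expose over the API.
choices = types.ModuleType('choices')
choices.face_mask_types = [ 'box', 'occlusion', 'region' ]
choices.download_scopes = [ 'lite', 'full' ]
choices.version = 3.1  # not a dict/list, so it is skipped


def collect_choices(module : types.ModuleType) -> dict:
	collected = {}
	for key in dir(module):
		if not key.startswith('__'):
			value = getattr(module, key)
			if isinstance(value, (dict, list)):
				collected[key] = value
	return collected


print(collect_choices(choices))
```

The `isinstance` filter doubles as a JSON-serializability check: anything that survives it can go straight through `send_json` or a Litestar response.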

@@ -71,8 +71,6 @@ def apply_args(args : Args, apply_state_item : ApplyStateItem) -> None:
 	apply_state_item('reference_face_distance', args.get('reference_face_distance'))
 	apply_state_item('reference_frame_number', args.get('reference_frame_number'))
 	# face masker
-	apply_state_item('face_occluder_model', args.get('face_occluder_model'))
-	apply_state_item('face_parser_model', args.get('face_parser_model'))
 	apply_state_item('face_mask_types', args.get('face_mask_types'))
 	apply_state_item('face_mask_blur', args.get('face_mask_blur'))
 	apply_state_item('face_mask_padding', normalize_padding(args.get('face_mask_padding')))
@@ -107,7 +105,7 @@ def apply_args(args : Args, apply_state_item : ApplyStateItem) -> None:
 	apply_state_item('output_video_fps', output_video_fps)
 	apply_state_item('skip_audio', args.get('skip_audio'))
 	# processors
-	available_processors = [ file.get('name') for file in list_directory('facefusion/processors/modules') ]
+	available_processors = list_directory('facefusion/processors/modules')
 	apply_state_item('processors', args.get('processors'))
 	for processor_module in get_processors_modules(available_processors):
 		processor_module.apply_args(args, apply_state_item)

@@ -2,7 +2,7 @@ import logging
 from typing import List, Sequence
 from facefusion.common_helper import create_float_range, create_int_range
-from facefusion.typing import Angle, DownloadProvider, DownloadProviderSet, DownloadScope, ExecutionProvider, ExecutionProviderSet, FaceDetectorModel, FaceDetectorSet, FaceLandmarkerModel, FaceMaskRegion, FaceMaskRegionSet, FaceMaskType, FaceOccluderModel, FaceParserModel, FaceSelectorMode, FaceSelectorOrder, Gender, JobStatus, LogLevel, LogLevelSet, OutputAudioEncoder, OutputVideoEncoder, OutputVideoPreset, Race, Score, TempFrameFormat, UiWorkflow, VideoMemoryStrategy
+from facefusion.typing import Angle, DownloadProviderSet, DownloadScope, ExecutionProviderSet, FaceDetectorSet, FaceLandmarkerModel, FaceMaskRegion, FaceMaskType, FaceSelectorMode, FaceSelectorOrder, Gender, JobStatus, LogLevelSet, OutputAudioEncoder, OutputVideoEncoder, OutputVideoPreset, Race, Score, TempFrameFormat, UiWorkflow, VideoMemoryStrategy
 face_detector_set : FaceDetectorSet =\
 {
@@ -11,29 +11,13 @@ face_detector_set : FaceDetectorSet =\
 	'scrfd': [ '160x160', '320x320', '480x480', '512x512', '640x640' ],
 	'yoloface': [ '640x640' ]
 }
-face_detector_models : List[FaceDetectorModel] = list(face_detector_set.keys())
 face_landmarker_models : List[FaceLandmarkerModel] = [ 'many', '2dfan4', 'peppa_wutz' ]
 face_selector_modes : List[FaceSelectorMode] = [ 'many', 'one', 'reference' ]
 face_selector_orders : List[FaceSelectorOrder] = [ 'left-right', 'right-left', 'top-bottom', 'bottom-top', 'small-large', 'large-small', 'best-worst', 'worst-best' ]
-face_selector_genders : List[Gender] = [ 'female', 'male' ]
-face_selector_races : List[Race] = [ 'white', 'black', 'latino', 'asian', 'indian', 'arabic' ]
-face_occluder_models : List[FaceOccluderModel] = [ 'xseg_1', 'xseg_2' ]
-face_parser_models : List[FaceParserModel] = [ 'bisenet_resnet_18', 'bisenet_resnet_34' ]
+face_selector_genders : List[Gender] = ['female', 'male']
+face_selector_races : List[Race] = ['white', 'black', 'latino', 'asian', 'indian', 'arabic']
 face_mask_types : List[FaceMaskType] = [ 'box', 'occlusion', 'region' ]
-face_mask_region_set : FaceMaskRegionSet =\
-{
-	'skin': 1,
-	'left-eyebrow': 2,
-	'right-eyebrow': 3,
-	'left-eye': 4,
-	'right-eye': 5,
-	'glasses': 6,
-	'nose': 10,
-	'mouth': 11,
-	'upper-lip': 12,
-	'lower-lip': 13
-}
-face_mask_regions : List[FaceMaskRegion] = list(face_mask_region_set.keys())
+face_mask_regions : List[FaceMaskRegion] = [ 'skin', 'left-eyebrow', 'right-eyebrow', 'left-eye', 'right-eye', 'glasses', 'nose', 'mouth', 'upper-lip', 'lower-lip' ]
 temp_frame_formats : List[TempFrameFormat] = [ 'bmp', 'jpg', 'png' ]
 output_audio_encoders : List[OutputAudioEncoder] = [ 'aac', 'libmp3lame', 'libopus', 'libvorbis' ]
 output_video_encoders : List[OutputVideoEncoder] = [ 'libx264', 'libx265', 'libvpx-vp9', 'h264_nvenc', 'hevc_nvenc', 'h264_amf', 'hevc_amf', 'h264_qsv', 'hevc_qsv', 'h264_videotoolbox', 'hevc_videotoolbox' ]
@@ -52,21 +36,11 @@ execution_provider_set : ExecutionProviderSet =\
 	'rocm': 'ROCMExecutionProvider',
 	'tensorrt': 'TensorrtExecutionProvider'
 }
-execution_providers : List[ExecutionProvider] = list(execution_provider_set.keys())
 download_provider_set : DownloadProviderSet =\
 {
-	'github':
-	{
-		'url': 'https://github.com',
-		'path': '/facefusion/facefusion-assets/releases/download/{base_name}/{file_name}'
-	},
-	'huggingface':
-	{
-		'url': 'https://huggingface.co',
-		'path': '/facefusion/{base_name}/resolve/main/{file_name}'
-	}
+	'github': 'https://github.com/facefusion/facefusion-assets/releases/download/{base_name}/{file_name}',
+	'huggingface': 'https://huggingface.co/facefusion/{base_name}/resolve/main/{file_name}'
 }
-download_providers : List[DownloadProvider] = list(download_provider_set.keys())
 download_scopes : List[DownloadScope] = [ 'lite', 'full' ]
 video_memory_strategies : List[VideoMemoryStrategy] = [ 'strict', 'moderate', 'tolerant' ]
@@ -78,7 +52,6 @@ log_level_set : LogLevelSet =\
 	'info': logging.INFO,
 	'debug': logging.DEBUG
 }
-log_levels : List[LogLevel] = list(log_level_set.keys())
 ui_workflows : List[UiWorkflow] = [ 'instant_runner', 'job_runner', 'job_manager' ]
 job_statuses : List[JobStatus] = [ 'drafted', 'queued', 'completed', 'failed' ]
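The hunk above flattens each download provider from a `{url, path}` pair into a single URL template, so resolving an asset URL becomes one `str.format` call. A minimal sketch of the resolution this enables (the helper name `resolve_download_url` comes from the commit log; its signature and the example `base_name` are assumptions):

```python
# Flattened provider set, copied from the diff above.
download_provider_set = \
{
	'github': 'https://github.com/facefusion/facefusion-assets/releases/download/{base_name}/{file_name}',
	'huggingface': 'https://huggingface.co/facefusion/{base_name}/resolve/main/{file_name}'
}


def resolve_download_url(download_provider : str, base_name : str, file_name : str) -> str:
	# One format call replaces the former url + path concatenation.
	return download_provider_set[download_provider].format(base_name = base_name, file_name = file_name)


print(resolve_download_url('github', 'models-3.0.0', 'inswapper_128.onnx'))
```

Keeping both placeholders in one template also means a provider can rearrange them freely, as the huggingface entry does.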

@@ -9,7 +9,7 @@ from facefusion.download import conditional_download_hashes, conditional_downloa
 from facefusion.filesystem import resolve_relative_path
 from facefusion.thread_helper import conditional_thread_semaphore
 from facefusion.typing import DownloadScope, Fps, InferencePool, ModelOptions, ModelSet, VisionFrame
-from facefusion.vision import detect_video_fps, get_video_frame, read_image
+from facefusion.vision import count_video_frame_total, detect_video_fps, get_video_frame, read_image
 PROBABILITY_LIMIT = 0.80
 RATE_LIMIT = 10
@@ -108,9 +108,10 @@ def analyse_image(image_path : str) -> bool:
 @lru_cache(maxsize = None)
-def analyse_video(video_path : str, trim_frame_start : int, trim_frame_end : int) -> bool:
+def analyse_video(video_path : str, start_frame : int, end_frame : int) -> bool:
+	video_frame_total = count_video_frame_total(video_path)
 	video_fps = detect_video_fps(video_path)
-	frame_range = range(trim_frame_start, trim_frame_end)
+	frame_range = range(start_frame or 0, end_frame or video_frame_total)
 	rate = 0.0
 	counter = 0
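The new `analyse_video` signature drops the pre-restricted trim values and instead defaults in place: `start_frame or 0` and `end_frame or video_frame_total` let callers pass `None` for either bound. A small self-contained illustration of that fallback, with the frame total hard-coded for the example:

```python
def build_frame_range(start_frame, end_frame, video_frame_total):
	# None falls back to the clip boundaries, mirroring
	# range(start_frame or 0, end_frame or video_frame_total) in the diff.
	return range(start_frame or 0, end_frame or video_frame_total)


print(list(build_frame_range(None, None, 5)))  # whole clip
print(list(build_frame_range(2, None, 5)))     # from frame 2 to the end
```

Note that `or` treats an explicit `0` the same as `None`: harmless for the start bound (the fallback is also 0), and for the end bound it simply means the whole clip.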

@@ -2,11 +2,12 @@ import itertools
 import shutil
 import signal
 import sys
+import webbrowser
 from time import time
 import numpy
-from facefusion import content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, logger, process_manager, state_manager, voice_extractor, wording
+from facefusion import api, content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, logger, process_manager, state_manager, voice_extractor, wording
 from facefusion.args import apply_args, collect_job_args, reduce_job_args, reduce_step_args
 from facefusion.common_helper import get_first
 from facefusion.content_analyser import analyse_image, analyse_video
@@ -26,7 +27,7 @@ from facefusion.program_helper import validate_args
 from facefusion.statistics import conditional_log_statistics
 from facefusion.temp_helper import clear_temp_directory, create_temp_directory, get_temp_file_path, get_temp_frame_paths, move_temp_file
 from facefusion.typing import Args, ErrorCode
-from facefusion.vision import get_video_frame, pack_resolution, read_image, read_static_images, restrict_image_resolution, restrict_trim_frame, restrict_video_fps, restrict_video_resolution, unpack_resolution
+from facefusion.vision import get_video_frame, pack_resolution, read_image, read_static_images, restrict_image_resolution, restrict_video_fps, restrict_video_resolution, unpack_resolution
 def cli() -> None:
@@ -61,15 +62,10 @@ def route(args : Args) -> None:
 	if not pre_check():
 		return conditional_exit(2)
 	if state_manager.get_item('command') == 'run':
-		import facefusion.uis.core as ui
-		if not common_pre_check() or not processors_pre_check():
-			return conditional_exit(2)
-		for ui_layout in ui.get_ui_layouts_modules(state_manager.get_item('ui_layouts')):
-			if not ui_layout.pre_check():
-				return conditional_exit(2)
-		ui.init()
-		ui.launch()
+		if state_manager.get_item('open_browser'):
+			webbrowser.open('http://127.0.0.1:8000/frontend')
+		logger.info('http://127.0.0.1:8000/frontend', __name__)
+		api.run()
 	if state_manager.get_item('command') == 'headless-run':
 		if not job_manager.init_jobs(state_manager.get_item('jobs_path')):
 			hard_exit(1)
@@ -133,7 +129,7 @@ def force_download() -> ErrorCode:
 		face_recognizer,
 		voice_extractor
 	]
-	available_processors = [ file.get('name') for file in list_directory('facefusion/processors/modules') ]
+	available_processors = list_directory('facefusion/processors/modules')
 	processor_modules = get_processors_modules(available_processors)
 	for module in common_modules + processor_modules:
@@ -389,8 +385,7 @@ def process_image(start_time : float) -> ErrorCode:
 def process_video(start_time : float) -> ErrorCode:
-	trim_frame_start, trim_frame_end = restrict_trim_frame(state_manager.get_item('target_path'), state_manager.get_item('trim_frame_start'), state_manager.get_item('trim_frame_end'))
-	if analyse_video(state_manager.get_item('target_path'), trim_frame_start, trim_frame_end):
+	if analyse_video(state_manager.get_item('target_path'), state_manager.get_item('trim_frame_start'), state_manager.get_item('trim_frame_end')):
 		return 3
 	# clear temp
 	logger.debug(wording.get('clearing_temp'), __name__)
@@ -403,7 +398,7 @@ def process_video(start_time : float) -> ErrorCode:
 	temp_video_resolution = pack_resolution(restrict_video_resolution(state_manager.get_item('target_path'), unpack_resolution(state_manager.get_item('output_video_resolution'))))
 	temp_video_fps = restrict_video_fps(state_manager.get_item('target_path'), state_manager.get_item('output_video_fps'))
 	logger.info(wording.get('extracting_frames').format(resolution = temp_video_resolution, fps = temp_video_fps), __name__)
-	if extract_frames(state_manager.get_item('target_path'), temp_video_resolution, temp_video_fps, trim_frame_start, trim_frame_end):
+	if extract_frames(state_manager.get_item('target_path'), temp_video_resolution, temp_video_fps):
logger.debug(wording.get('extracting_frames_succeed'), __name__) logger.debug(wording.get('extracting_frames_succeed'), __name__)
else: else:
if is_process_stopping(): if is_process_stopping():
@ -452,7 +447,7 @@ def process_video(start_time : float) -> ErrorCode:
logger.warn(wording.get('replacing_audio_skipped'), __name__) logger.warn(wording.get('replacing_audio_skipped'), __name__)
move_temp_file(state_manager.get_item('target_path'), state_manager.get_item('output_path')) move_temp_file(state_manager.get_item('target_path'), state_manager.get_item('output_path'))
else: else:
if restore_audio(state_manager.get_item('target_path'), state_manager.get_item('output_path'), state_manager.get_item('output_video_fps'), trim_frame_start, trim_frame_end): if restore_audio(state_manager.get_item('target_path'), state_manager.get_item('output_path'), state_manager.get_item('output_video_fps')):
logger.debug(wording.get('restoring_audio_succeed'), __name__) logger.debug(wording.get('restoring_audio_succeed'), __name__)
else: else:
if is_process_stopping(): if is_process_stopping():
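The `run` branch above swaps the Gradio UI launch for an API server plus an optional browser open. A minimal sketch of that pattern, with `run_server` and `open_url` injected as callables (an assumption for testability; the diff calls `api.run()` and `webbrowser.open` directly, and the URL is hard-coded exactly as in the patch):

```python
import webbrowser

FRONTEND_URL = 'http://127.0.0.1:8000/frontend'

def launch(open_browser : bool, run_server, open_url = webbrowser.open) -> str:
	# open the browser first, because run_server() blocks until shutdown
	if open_browser:
		open_url(FRONTEND_URL)
	run_server()
	return FRONTEND_URL
```

Opening the browser before the blocking server call is the reason for the ordering in the diff: once `api.run()` starts, no further statements execute until the server exits.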

View File

@@ -1,23 +1,23 @@
 import os
 import shutil
+import ssl
 import subprocess
+import urllib.request
 from functools import lru_cache
 from typing import List, Optional, Tuple
 from urllib.parse import urlparse

 from tqdm import tqdm

-import facefusion.choices
 from facefusion import logger, process_manager, state_manager, wording
+from facefusion.choices import download_provider_set
+from facefusion.common_helper import is_macos
 from facefusion.filesystem import get_file_size, is_file, remove_file
 from facefusion.hash_helper import validate_hash
-from facefusion.typing import DownloadProvider, DownloadSet
+from facefusion.typing import DownloadProviderKey, DownloadSet

-def open_curl(args : List[str]) -> subprocess.Popen[bytes]:
-	commands = [ shutil.which('curl'), '--silent', '--insecure', '--location' ]
-	commands.extend(args)
-	return subprocess.Popen(commands, stdin = subprocess.PIPE, stdout = subprocess.PIPE)
+if is_macos():
+	ssl._create_default_https_context = ssl._create_unverified_context

 def conditional_download(download_directory_path : str, urls : List[str]) -> None:

@@ -25,15 +25,13 @@ def conditional_download(download_directory_path : str, urls : List[str]) -> Non
 		download_file_name = os.path.basename(urlparse(url).path)
 		download_file_path = os.path.join(download_directory_path, download_file_name)
 		initial_size = get_file_size(download_file_path)
-		download_size = get_static_download_size(url)
+		download_size = get_download_size(url)
 		if initial_size < download_size:
 			with tqdm(total = download_size, initial = initial_size, desc = wording.get('downloading'), unit = 'B', unit_scale = True, unit_divisor = 1024, ascii = ' =', disable = state_manager.get_item('log_level') in [ 'warn', 'error' ]) as progress:
-				commands = [ '--create-dirs', '--continue-at', '-', '--output', download_file_path, url ]
-				open_curl(commands)
+				subprocess.Popen([ shutil.which('curl'), '--create-dirs', '--silent', '--insecure', '--location', '--continue-at', '-', '--output', download_file_path, url ])
 				current_size = initial_size
 				progress.set_postfix(download_providers = state_manager.get_item('download_providers'), file_name = download_file_name)

 				while current_size < download_size:
 					if is_file(download_file_path):
 						current_size = get_file_size(download_file_path)

@@ -41,26 +39,13 @@ def conditional_download(download_directory_path : str, urls : List[str]) -> Non
 @lru_cache(maxsize = None)
-def get_static_download_size(url : str) -> int:
-	commands = [ '-I', url ]
-	process = open_curl(commands)
-	lines = reversed(process.stdout.readlines())
-
-	for line in lines:
-		__line__ = line.decode().lower()
-		if 'content-length:' in __line__:
-			_, content_length = __line__.split('content-length:')
-			return int(content_length)
-
-	return 0
-
-
-@lru_cache(maxsize = None)
-def ping_static_url(url : str) -> bool:
-	commands = [ '-I', url ]
-	process = open_curl(commands)
-	process.communicate()
-	return process.returncode == 0
+def get_download_size(url : str) -> int:
+	try:
+		response = urllib.request.urlopen(url, timeout = 10)
+		content_length = response.headers.get('Content-Length')
+		return int(content_length)
+	except (OSError, TypeError, ValueError):
+		return 0

 def conditional_download_hashes(hashes : DownloadSet) -> bool:

@@ -72,12 +57,10 @@ def conditional_download_hashes(hashes : DownloadSet) -> bool:
 	for index in hashes:
 		if hashes.get(index).get('path') in invalid_hash_paths:
 			invalid_hash_url = hashes.get(index).get('url')
-			if invalid_hash_url:
-				download_directory_path = os.path.dirname(hashes.get(index).get('path'))
-				conditional_download(download_directory_path, [ invalid_hash_url ])
+			download_directory_path = os.path.dirname(hashes.get(index).get('path'))
+			conditional_download(download_directory_path, [ invalid_hash_url ])

 	valid_hash_paths, invalid_hash_paths = validate_hash_paths(hash_paths)

 	for valid_hash_path in valid_hash_paths:
 		valid_hash_file_name, _ = os.path.splitext(os.path.basename(valid_hash_path))
 		logger.debug(wording.get('validating_hash_succeed').format(hash_file_name = valid_hash_file_name), __name__)

@@ -99,12 +82,10 @@ def conditional_download_sources(sources : DownloadSet) -> bool:
 	for index in sources:
 		if sources.get(index).get('path') in invalid_source_paths:
 			invalid_source_url = sources.get(index).get('url')
-			if invalid_source_url:
-				download_directory_path = os.path.dirname(sources.get(index).get('path'))
-				conditional_download(download_directory_path, [ invalid_source_url ])
+			download_directory_path = os.path.dirname(sources.get(index).get('path'))
+			conditional_download(download_directory_path, [ invalid_source_url ])

 	valid_source_paths, invalid_source_paths = validate_source_paths(source_paths)

 	for valid_source_path in valid_source_paths:
 		valid_source_file_name, _ = os.path.splitext(os.path.basename(valid_source_path))
 		logger.debug(wording.get('validating_source_succeed').format(source_file_name = valid_source_file_name), __name__)

@@ -147,17 +128,11 @@ def validate_source_paths(source_paths : List[str]) -> Tuple[List[str], List[str
 def resolve_download_url(base_name : str, file_name : str) -> Optional[str]:
 	download_providers = state_manager.get_item('download_providers')

-	for download_provider in download_providers:
-		if ping_download_provider(download_provider):
+	for download_provider in download_provider_set:
+		if download_provider in download_providers:
 			return resolve_download_url_by_provider(download_provider, base_name, file_name)
 	return None

-def ping_download_provider(download_provider : DownloadProvider) -> bool:
-	download_provider_value = facefusion.choices.download_provider_set.get(download_provider)
-	return ping_static_url(download_provider_value.get('url'))
-
-
-def resolve_download_url_by_provider(download_provider : DownloadProvider, base_name : str, file_name : str) -> Optional[str]:
-	download_provider_value = facefusion.choices.download_provider_set.get(download_provider)
-	return download_provider_value.get('url') + download_provider_value.get('path').format(base_name = base_name, file_name = file_name)
+def resolve_download_url_by_provider(download_provider : DownloadProviderKey, base_name : str, file_name : str) -> Optional[str]:
+	return download_provider_set.get(download_provider).format(base_name = base_name, file_name = file_name)
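The right-hand side of this hunk resolves URLs by filling a per-provider format string with `{base_name}` and `{file_name}` placeholders. A self-contained sketch of that template approach, where the provider table and its URLs are illustrative stand-ins rather than the real `facefusion.choices` values:

```python
from typing import List, Optional

# illustrative provider templates (assumption, not the real choices table)
DOWNLOAD_PROVIDER_SET =\
{
	'github': 'https://github.com/example/{base_name}/releases/download/{file_name}',
	'huggingface': 'https://huggingface.co/example/{base_name}/resolve/main/{file_name}'
}

def resolve_download_url(download_providers : List[str], base_name : str, file_name : str) -> Optional[str]:
	# iterate the static set so preference order stays deterministic,
	# regardless of the order the user listed their providers in
	for download_provider in DOWNLOAD_PROVIDER_SET:
		if download_provider in download_providers:
			return DOWNLOAD_PROVIDER_SET.get(download_provider).format(base_name = base_name, file_name = file_name)
	return None
```

The left-hand side instead pings each provider and takes the first reachable one; the template version trades that availability check for zero network calls at resolve time.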

View File

@@ -6,38 +6,37 @@ from typing import Any, List, Optional
 from onnxruntime import get_available_providers, set_default_logger_severity

-import facefusion.choices
-from facefusion.typing import ExecutionDevice, ExecutionProvider, ValueAndUnit
+from facefusion.choices import execution_provider_set
+from facefusion.typing import ExecutionDevice, ExecutionProviderKey, ExecutionProviderSet, ValueAndUnit

 set_default_logger_severity(3)

-def has_execution_provider(execution_provider : ExecutionProvider) -> bool:
-	return execution_provider in get_available_execution_providers()
+def has_execution_provider(execution_provider_key : ExecutionProviderKey) -> bool:
+	return execution_provider_key in get_execution_provider_set().keys()

-def get_available_execution_providers() -> List[ExecutionProvider]:
-	inference_execution_providers = get_available_providers()
-	available_execution_providers = []
+def get_execution_provider_set() -> ExecutionProviderSet:
+	available_execution_providers = get_available_providers()
+	available_execution_provider_set : ExecutionProviderSet = {}

-	for execution_provider, execution_provider_value in facefusion.choices.execution_provider_set.items():
-		if execution_provider_value in inference_execution_providers:
-			available_execution_providers.append(execution_provider)
-
-	return available_execution_providers
+	for execution_provider_key, execution_provider_value in execution_provider_set.items():
+		if execution_provider_value in available_execution_providers:
+			available_execution_provider_set[execution_provider_key] = execution_provider_value
+	return available_execution_provider_set

-def create_inference_execution_providers(execution_device_id : str, execution_providers : List[ExecutionProvider]) -> List[Any]:
-	inference_execution_providers : List[Any] = []
+def create_execution_providers(execution_device_id : str, execution_provider_keys : List[ExecutionProviderKey]) -> List[Any]:
+	execution_providers : List[Any] = []

-	for execution_provider in execution_providers:
-		if execution_provider == 'cuda':
-			inference_execution_providers.append((facefusion.choices.execution_provider_set.get(execution_provider),
+	for execution_provider_key in execution_provider_keys:
+		if execution_provider_key == 'cuda':
+			execution_providers.append((execution_provider_set.get(execution_provider_key),
 			{
 				'device_id': execution_device_id
 			}))
-		if execution_provider == 'tensorrt':
-			inference_execution_providers.append((facefusion.choices.execution_provider_set.get(execution_provider),
+		if execution_provider_key == 'tensorrt':
+			execution_providers.append((execution_provider_set.get(execution_provider_key),
 			{
 				'device_id': execution_device_id,
 				'trt_engine_cache_enable': True,

@@ -46,24 +45,24 @@ def create_inference_execution_providers(execution_device_id : str, execution_pr
 				'trt_timing_cache_path': '.caches',
 				'trt_builder_optimization_level': 5
 			}))
-		if execution_provider == 'openvino':
-			inference_execution_providers.append((facefusion.choices.execution_provider_set.get(execution_provider),
+		if execution_provider_key == 'openvino':
+			execution_providers.append((execution_provider_set.get(execution_provider_key),
 			{
 				'device_type': 'GPU' if execution_device_id == '0' else 'GPU.' + execution_device_id,
 				'precision': 'FP32'
 			}))
-		if execution_provider in [ 'directml', 'rocm' ]:
-			inference_execution_providers.append((facefusion.choices.execution_provider_set.get(execution_provider),
+		if execution_provider_key in [ 'directml', 'rocm' ]:
+			execution_providers.append((execution_provider_set.get(execution_provider_key),
 			{
 				'device_id': execution_device_id
 			}))
-		if execution_provider == 'coreml':
-			inference_execution_providers.append(facefusion.choices.execution_provider_set.get(execution_provider))
+		if execution_provider_key == 'coreml':
+			execution_providers.append(execution_provider_set.get(execution_provider_key))

-	if 'cpu' in execution_providers:
-		inference_execution_providers.append(facefusion.choices.execution_provider_set.get('cpu'))
-
-	return inference_execution_providers
+	if 'cpu' in execution_provider_keys:
+		execution_providers.append(execution_provider_set.get('cpu'))
+	return execution_providers

 def run_nvidia_smi() -> subprocess.Popen[bytes]:
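Both sides of this hunk filter a static key-to-provider mapping against what ONNX Runtime reports as available. A pure-dict sketch of that filtering, where `available` stands in for the result of `onnxruntime.get_available_providers()` (not imported here) and the mapping entries are a small illustrative subset:

```python
from typing import Dict, List

# subset of the provider mapping for illustration
EXECUTION_PROVIDER_SET : Dict[str, str] =\
{
	'cpu': 'CPUExecutionProvider',
	'cuda': 'CUDAExecutionProvider',
	'coreml': 'CoreMLExecutionProvider'
}

def filter_execution_providers(available : List[str]) -> Dict[str, str]:
	# keep only the entries whose ONNX Runtime name the local build reports
	execution_provider_set : Dict[str, str] = {}
	for execution_provider_key, execution_provider_value in EXECUTION_PROVIDER_SET.items():
		if execution_provider_value in available:
			execution_provider_set[execution_provider_key] = execution_provider_value
	return execution_provider_set
```

The resulting dict is what makes `has_execution_provider()` a simple key lookup instead of a list scan.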

View File

@@ -1,4 +1,3 @@
-import signal
 import sys
 from time import sleep

@@ -8,7 +7,6 @@ from facefusion.typing import ErrorCode
 def hard_exit(error_code : ErrorCode) -> None:
-	signal.signal(signal.SIGINT, signal.SIG_IGN)
 	sys.exit(error_code)

View File

@@ -79,11 +79,13 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
 def get_inference_pool() -> InferencePool:
 	_, model_sources = collect_model_downloads()
-	return inference_manager.get_inference_pool(__name__, model_sources)
+	model_context = __name__ + '.' + state_manager.get_item('face_detector_model')
+	return inference_manager.get_inference_pool(model_context, model_sources)

 def clear_inference_pool() -> None:
-	inference_manager.clear_inference_pool(__name__)
+	model_context = __name__ + '.' + state_manager.get_item('face_detector_model')
+	inference_manager.clear_inference_pool(model_context)

 def collect_model_downloads() -> Tuple[DownloadSet, DownloadSet]:

@@ -94,15 +96,12 @@ def collect_model_downloads() -> Tuple[DownloadSet, DownloadSet]:
 	if state_manager.get_item('face_detector_model') in [ 'many', 'retinaface' ]:
 		model_hashes['retinaface'] = model_set.get('retinaface').get('hashes').get('retinaface')
 		model_sources['retinaface'] = model_set.get('retinaface').get('sources').get('retinaface')
 	if state_manager.get_item('face_detector_model') in [ 'many', 'scrfd' ]:
 		model_hashes['scrfd'] = model_set.get('scrfd').get('hashes').get('scrfd')
 		model_sources['scrfd'] = model_set.get('scrfd').get('sources').get('scrfd')
 	if state_manager.get_item('face_detector_model') in [ 'many', 'yoloface' ]:
 		model_hashes['yoloface'] = model_set.get('yoloface').get('hashes').get('yoloface')
 		model_sources['yoloface'] = model_set.get('yoloface').get('sources').get('yoloface')
 	return model_hashes, model_sources
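The change above scopes the inference pool by module name plus the active model, so switching `face_detector_model` gets a fresh pool instead of reusing a stale one. A minimal sketch of that keying, with a plain dict standing in for facefusion's `inference_manager` (an assumption; the real manager also handles loading and device placement):

```python
from typing import Dict

_POOLS : Dict[str, object] = {}

def get_inference_pool(module_name : str, model_name : str) -> object:
	# compose the context key the same way as the diff: module '.' model
	model_context = module_name + '.' + model_name
	if model_context not in _POOLS:
		_POOLS[model_context] = object() # placeholder for loaded sessions
	return _POOLS[model_context]
```

With this keying, two calls for the same model share a pool, while a different model name yields a separate entry.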

View File

@@ -80,11 +80,13 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
 def get_inference_pool() -> InferencePool:
 	_, model_sources = collect_model_downloads()
-	return inference_manager.get_inference_pool(__name__, model_sources)
+	model_context = __name__ + '.' + state_manager.get_item('face_landmarker_model')
+	return inference_manager.get_inference_pool(model_context, model_sources)

 def clear_inference_pool() -> None:
-	inference_manager.clear_inference_pool(__name__)
+	model_context = __name__ + '.' + state_manager.get_item('face_landmarker_model')
+	inference_manager.clear_inference_pool(model_context)

 def collect_model_downloads() -> Tuple[DownloadSet, DownloadSet]:

@@ -101,11 +103,9 @@ def collect_model_downloads() -> Tuple[DownloadSet, DownloadSet]:
 	if state_manager.get_item('face_landmarker_model') in [ 'many', '2dfan4' ]:
 		model_hashes['2dfan4'] = model_set.get('2dfan4').get('hashes').get('2dfan4')
 		model_sources['2dfan4'] = model_set.get('2dfan4').get('sources').get('2dfan4')
 	if state_manager.get_item('face_landmarker_model') in [ 'many', 'peppa_wutz' ]:
 		model_hashes['peppa_wutz'] = model_set.get('peppa_wutz').get('hashes').get('peppa_wutz')
 		model_sources['peppa_wutz'] = model_set.get('peppa_wutz').get('sources').get('peppa_wutz')
 	return model_hashes, model_sources

@@ -123,7 +123,6 @@ def detect_face_landmarks(vision_frame : VisionFrame, bounding_box : BoundingBox
 	if state_manager.get_item('face_landmarker_model') in [ 'many', '2dfan4' ]:
 		face_landmark_2dfan4, face_landmark_score_2dfan4 = detect_with_2dfan4(vision_frame, bounding_box, face_angle)
 	if state_manager.get_item('face_landmarker_model') in [ 'many', 'peppa_wutz' ]:
 		face_landmark_peppa_wutz, face_landmark_score_peppa_wutz = detect_with_peppa_wutz(vision_frame, bounding_box, face_angle)

View File

@@ -1,83 +1,56 @@
 from functools import lru_cache
-from typing import List, Tuple
+from typing import Dict, List, Tuple

 import cv2
 import numpy
 from cv2.typing import Size

-import facefusion.choices
-from facefusion import inference_manager, state_manager
+from facefusion import inference_manager
 from facefusion.download import conditional_download_hashes, conditional_download_sources, resolve_download_url
 from facefusion.filesystem import resolve_relative_path
 from facefusion.thread_helper import conditional_thread_semaphore
 from facefusion.typing import DownloadScope, DownloadSet, FaceLandmark68, FaceMaskRegion, InferencePool, Mask, ModelSet, Padding, VisionFrame

+FACE_MASK_REGIONS : Dict[FaceMaskRegion, int] =\
+{
+	'skin': 1,
+	'left-eyebrow': 2,
+	'right-eyebrow': 3,
+	'left-eye': 4,
+	'right-eye': 5,
+	'glasses': 6,
+	'nose': 10,
+	'mouth': 11,
+	'upper-lip': 12,
+	'lower-lip': 13
+}

 @lru_cache(maxsize = None)
 def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
 	return\
 	{
-		'xseg_1':
+		'face_occluder':
 		{
 			'hashes':
 			{
 				'face_occluder':
 				{
-					'url': resolve_download_url('models-3.1.0', 'xseg_1.hash'),
-					'path': resolve_relative_path('../.assets/models/xseg_1.hash')
+					'url': resolve_download_url('models-3.1.0', 'xseg_groggy_5.hash'),
+					'path': resolve_relative_path('../.assets/models/xseg_groggy_5.hash')
 				}
 			},
 			'sources':
 			{
 				'face_occluder':
 				{
-					'url': resolve_download_url('models-3.1.0', 'xseg_1.onnx'),
-					'path': resolve_relative_path('../.assets/models/xseg_1.onnx')
+					'url': resolve_download_url('models-3.1.0', 'xseg_groggy_5.onnx'),
+					'path': resolve_relative_path('../.assets/models/xseg_groggy_5.onnx')
 				}
 			},
 			'size': (256, 256)
 		},
-		'xseg_2':
-		{
-			'hashes':
-			{
-				'face_occluder':
-				{
-					'url': resolve_download_url('models-3.1.0', 'xseg_2.hash'),
-					'path': resolve_relative_path('../.assets/models/xseg_2.hash')
-				}
-			},
-			'sources':
-			{
-				'face_occluder':
-				{
-					'url': resolve_download_url('models-3.1.0', 'xseg_2.onnx'),
-					'path': resolve_relative_path('../.assets/models/xseg_2.onnx')
-				}
-			},
-			'size': (256, 256)
-		},
-		'bisenet_resnet_18':
-		{
-			'hashes':
-			{
-				'face_parser':
-				{
-					'url': resolve_download_url('models-3.1.0', 'bisenet_resnet_18.hash'),
-					'path': resolve_relative_path('../.assets/models/bisenet_resnet_18.hash')
-				}
-			},
-			'sources':
-			{
-				'face_parser':
-				{
-					'url': resolve_download_url('models-3.1.0', 'bisenet_resnet_18.onnx'),
-					'path': resolve_relative_path('../.assets/models/bisenet_resnet_18.onnx')
-				}
-			},
-			'size': (512, 512)
-		},
-		'bisenet_resnet_34':
+		'face_parser':
 		{
 			'hashes':
 			{

@@ -110,26 +83,17 @@ def clear_inference_pool() -> None:
 def collect_model_downloads() -> Tuple[DownloadSet, DownloadSet]:
-	model_hashes = {}
-	model_sources = {}
 	model_set = create_static_model_set('full')
-
-	if state_manager.get_item('face_occluder_model') == 'xseg_1':
-		model_hashes['xseg_1'] = model_set.get('xseg_1').get('hashes').get('face_occluder')
-		model_sources['xseg_1'] = model_set.get('xseg_1').get('sources').get('face_occluder')
-
-	if state_manager.get_item('face_occluder_model') == 'xseg_2':
-		model_hashes['xseg_2'] = model_set.get('xseg_2').get('hashes').get('face_occluder')
-		model_sources['xseg_2'] = model_set.get('xseg_2').get('sources').get('face_occluder')
-
-	if state_manager.get_item('face_parser_model') == 'bisenet_resnet_18':
-		model_hashes['bisenet_resnet_18'] = model_set.get('bisenet_resnet_18').get('hashes').get('face_parser')
-		model_sources['bisenet_resnet_18'] = model_set.get('bisenet_resnet_18').get('sources').get('face_parser')
-
-	if state_manager.get_item('face_parser_model') == 'bisenet_resnet_34':
-		model_hashes['bisenet_resnet_34'] = model_set.get('bisenet_resnet_34').get('hashes').get('face_parser')
-		model_sources['bisenet_resnet_34'] = model_set.get('bisenet_resnet_34').get('sources').get('face_parser')
+	model_hashes =\
+	{
+		'face_occluder': model_set.get('face_occluder').get('hashes').get('face_occluder'),
+		'face_parser': model_set.get('face_parser').get('hashes').get('face_parser')
+	}
+	model_sources =\
+	{
+		'face_occluder': model_set.get('face_occluder').get('sources').get('face_occluder'),
+		'face_parser': model_set.get('face_parser').get('sources').get('face_parser')
+	}
 	return model_hashes, model_sources

@@ -154,8 +118,7 @@ def create_static_box_mask(crop_size : Size, face_mask_blur : float, face_mask_p
 def create_occlusion_mask(crop_vision_frame : VisionFrame) -> Mask:
-	face_occluder_model = state_manager.get_item('face_occluder_model')
-	model_size = create_static_model_set('full').get(face_occluder_model).get('size')
+	model_size = create_static_model_set('full').get('face_occluder').get('size')
 	prepare_vision_frame = cv2.resize(crop_vision_frame, model_size)
 	prepare_vision_frame = numpy.expand_dims(prepare_vision_frame, axis = 0).astype(numpy.float32) / 255
 	prepare_vision_frame = prepare_vision_frame.transpose(0, 1, 2, 3)

@@ -167,8 +130,7 @@ def create_occlusion_mask(crop_vision_frame : VisionFrame) -> Mask:
 def create_region_mask(crop_vision_frame : VisionFrame, face_mask_regions : List[FaceMaskRegion]) -> Mask:
-	face_parser_model = state_manager.get_item('face_parser_model')
-	model_size = create_static_model_set('full').get(face_parser_model).get('size')
+	model_size = create_static_model_set('full').get('face_parser').get('size')
 	prepare_vision_frame = cv2.resize(crop_vision_frame, model_size)
 	prepare_vision_frame = prepare_vision_frame[:, :, ::-1].astype(numpy.float32) / 255
 	prepare_vision_frame = numpy.subtract(prepare_vision_frame, numpy.array([ 0.485, 0.456, 0.406 ]).astype(numpy.float32))

@@ -176,7 +138,7 @@ def create_region_mask(crop_vision_frame : VisionFrame, face_mask_regions : List
 	prepare_vision_frame = numpy.expand_dims(prepare_vision_frame, axis = 0)
 	prepare_vision_frame = prepare_vision_frame.transpose(0, 3, 1, 2)
 	region_mask = forward_parse_face(prepare_vision_frame)
-	region_mask = numpy.isin(region_mask.argmax(0), [ facefusion.choices.face_mask_region_set.get(face_mask_region) for face_mask_region in face_mask_regions ])
+	region_mask = numpy.isin(region_mask.argmax(0), [ FACE_MASK_REGIONS[region] for region in face_mask_regions ])
 	region_mask = cv2.resize(region_mask.astype(numpy.float32), crop_vision_frame.shape[:2][::-1])
 	region_mask = (cv2.GaussianBlur(region_mask.clip(0, 1), (0, 0), 5).clip(0.5, 1) - 0.5) * 2
 	return region_mask

@@ -192,8 +154,7 @@ def create_mouth_mask(face_landmark_68 : FaceLandmark68) -> Mask:
 def forward_occlude_face(prepare_vision_frame : VisionFrame) -> Mask:
-	face_occluder_model = state_manager.get_item('face_occluder_model')
-	face_occluder = get_inference_pool().get(face_occluder_model)
+	face_occluder = get_inference_pool().get('face_occluder')

 	with conditional_thread_semaphore():
 		occlusion_mask : Mask = face_occluder.run(None,

@@ -205,8 +166,7 @@ def forward_occlude_face(prepare_vision_frame : VisionFrame) -> Mask:
 def forward_parse_face(prepare_vision_frame : VisionFrame) -> Mask:
-	face_parser_model = state_manager.get_item('face_parser_model')
-	face_parser = get_inference_pool().get(face_parser_model)
+	face_parser = get_inference_pool().get('face_parser')

 	with conditional_thread_semaphore():
 		region_mask : Mask = face_parser.run(None,
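The `FACE_MASK_REGIONS` table feeds `numpy.isin` to turn the parser's label map into a binary region mask. A minimal sketch of that selection step, using a synthetic 2x2 label map and a subset of the region table (in the diff, the labels come from the face_parser output after `argmax`):

```python
import numpy

# subset of the region table for illustration
FACE_MASK_REGIONS =\
{
	'skin': 1,
	'nose': 10,
	'mouth': 11
}

def select_regions(label_map : numpy.ndarray, face_mask_regions : list) -> numpy.ndarray:
	# True wherever the pixel label belongs to one of the requested regions
	return numpy.isin(label_map, [ FACE_MASK_REGIONS[region] for region in face_mask_regions ])
```

The real pipeline then resizes this boolean mask back to the crop resolution and feathers it with a Gaussian blur.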

View File

@@ -11,7 +11,7 @@ from facefusion import logger, process_manager, state_manager, wording
 from facefusion.filesystem import remove_file
 from facefusion.temp_helper import get_temp_file_path, get_temp_frame_paths, get_temp_frames_pattern
 from facefusion.typing import AudioBuffer, Fps, OutputVideoPreset, UpdateProgress
-from facefusion.vision import count_trim_frame_total, detect_video_duration, restrict_video_fps
+from facefusion.vision import count_video_frame_total, detect_video_duration, restrict_video_fps

 def run_ffmpeg_with_progress(args: List[str], update_progress : UpdateProgress) -> subprocess.Popen[bytes]:

@@ -22,12 +22,10 @@ def run_ffmpeg_with_progress(args: List[str], update_progress : UpdateProgress) -> subprocess.Popen[bytes]:
 	while process_manager.is_processing():
 		try:
-			while __line__ := process.stdout.readline().decode().lower():
-				if 'frame=' in __line__:
-					_, frame_number = __line__.split('frame=')
+			while line := process.stdout.readline().decode():
+				if 'frame=' in line:
+					_, frame_number = line.split('frame=')
 					update_progress(int(frame_number))
 			if log_level == 'debug':
 				log_debug(process)
 			process.wait(timeout = 0.5)

@@ -75,17 +73,22 @@ def log_debug(process : subprocess.Popen[bytes]) -> None:
 			logger.debug(error.strip(), __name__)

-def extract_frames(target_path : str, temp_video_resolution : str, temp_video_fps : Fps, trim_frame_start : int, trim_frame_end : int) -> bool:
-	extract_frame_total = count_trim_frame_total(target_path, trim_frame_start, trim_frame_end)
+def extract_frames(target_path : str, temp_video_resolution : str, temp_video_fps : Fps) -> bool:
+	extract_frame_total = count_video_frame_total(state_manager.get_item('target_path'))
+	trim_frame_start = state_manager.get_item('trim_frame_start')
+	trim_frame_end = state_manager.get_item('trim_frame_end')
 	temp_frames_pattern = get_temp_frames_pattern(target_path, '%08d')
 	commands = [ '-i', target_path, '-s', str(temp_video_resolution), '-q:v', '0' ]

 	if isinstance(trim_frame_start, int) and isinstance(trim_frame_end, int):
 		commands.extend([ '-vf', 'trim=start_frame=' + str(trim_frame_start) + ':end_frame=' + str(trim_frame_end) + ',fps=' + str(temp_video_fps) ])
+		extract_frame_total = trim_frame_end - trim_frame_start
 	elif isinstance(trim_frame_start, int):
 		commands.extend([ '-vf', 'trim=start_frame=' + str(trim_frame_start) + ',fps=' + str(temp_video_fps) ])
+		extract_frame_total -= trim_frame_start
 	elif isinstance(trim_frame_end, int):
 		commands.extend([ '-vf', 'trim=end_frame=' + str(trim_frame_end) + ',fps=' + str(temp_video_fps) ])
+		extract_frame_total -= trim_frame_end
 	else:
 		commands.extend([ '-vf', 'fps=' + str(temp_video_fps) ])
 	commands.extend([ '-vsync', '0', temp_frames_pattern ])

@@ -96,10 +99,10 @@ def extract_frames(target_path : str, temp_video_resolution : str, temp_video_fp
 def merge_video(target_path : str, output_video_resolution : str, output_video_fps: Fps) -> bool:
+	merge_frame_total = len(get_temp_frame_paths(target_path))
 	output_video_encoder = state_manager.get_item('output_video_encoder')
 	output_video_quality = state_manager.get_item('output_video_quality')
 	output_video_preset = state_manager.get_item('output_video_preset')
-	merge_frame_total = len(get_temp_frame_paths(target_path))
 	temp_video_fps = restrict_video_fps(target_path, output_video_fps)
 	temp_file_path = get_temp_file_path(target_path)
 	temp_frames_pattern = get_temp_frames_pattern(target_path, '%08d')

@@ -176,7 +179,9 @@ def read_audio_buffer(target_path : str, sample_rate : int, channel_total : int)
 	return None

-def restore_audio(target_path : str, output_path : str, output_video_fps : Fps, trim_frame_start : int, trim_frame_end : int) -> bool:
+def restore_audio(target_path : str, output_path : str, output_video_fps : Fps) -> bool:
+	trim_frame_start = state_manager.get_item('trim_frame_start')
+	trim_frame_end = state_manager.get_item('trim_frame_end')
 	output_audio_encoder = state_manager.get_item('output_audio_encoder')
 	temp_file_path = get_temp_file_path(target_path)
 	temp_video_duration = detect_video_duration(temp_file_path)
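The progress hunk above drives the progress bar by scanning ffmpeg's output for `frame=` tokens, which ffmpeg emits as `key=value` pairs when run with its progress reporting enabled. A minimal standalone sketch of that token parsing; `parse_frame_number` is an illustrative helper, not a function from the codebase:

```python
from typing import Optional

def parse_frame_number(line : str) -> Optional[int]:
	# ffmpeg progress output emits lines such as 'frame=42';
	# split on the key and coerce the remainder to an integer
	if 'frame=' in line:
		_, frame_number = line.split('frame=')
		try:
			return int(frame_number.strip())
		except ValueError:
			return None
	return None
```

A caller would feed each decoded stdout line through this helper and forward non-`None` results to the progress callback.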


@@ -7,7 +7,6 @@ from typing import List, Optional
 import filetype

 from facefusion.common_helper import is_windows
-from facefusion.typing import File

 if is_windows():
 	import ctypes

@@ -127,23 +126,11 @@ def create_directory(directory_path : str) -> bool:
 	return False

-def list_directory(directory_path : str) -> Optional[List[File]]:
+def list_directory(directory_path : str) -> Optional[List[str]]:
 	if is_directory(directory_path):
-		file_paths = sorted(os.listdir(directory_path))
-		files: List[File] = []
-
-		for file_path in file_paths:
-			file_name, file_extension = os.path.splitext(file_path)
-
-			if not file_name.startswith(('.', '__')):
-				files.append(
-				{
-					'name': file_name,
-					'extension': file_extension,
-					'path': os.path.join(directory_path, file_path)
-				})
-		return files
+		file_paths = os.listdir(directory_path)
+		file_paths = [ Path(file_path).stem for file_path in file_paths if not Path(file_path).stem.startswith(('.', '__')) ]
+		return sorted(file_paths)
 	return None
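One side of the `list_directory` diff returns structured `File` dicts while the other returns bare stems. A self-contained sketch of the stem-filtering variant, skipping hidden and dunder entries; `list_stems` is an illustrative name, not the project's helper:

```python
import os
from pathlib import Path
from typing import List, Optional

def list_stems(directory_path : str) -> Optional[List[str]]:
	# return sorted file stems, skipping entries whose stem
	# starts with '.' (hidden) or '__' (dunder)
	if os.path.isdir(directory_path):
		stems = [ Path(name).stem for name in os.listdir(directory_path) ]
		return sorted(stem for stem in stems if not stem.startswith(('.', '__')))
	return None
```

Note that `Path('.hidden').stem` is `'.hidden'` (a dotfile has no suffix), so the `startswith` check filters it as intended.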


@@ -5,9 +5,9 @@ from onnxruntime import InferenceSession
 from facefusion import process_manager, state_manager
 from facefusion.app_context import detect_app_context
-from facefusion.execution import create_inference_execution_providers
+from facefusion.execution import create_execution_providers
 from facefusion.thread_helper import thread_lock
-from facefusion.typing import DownloadSet, ExecutionProvider, InferencePool, InferencePoolSet
+from facefusion.typing import DownloadSet, ExecutionProviderKey, InferencePool, InferencePoolSet

 INFERENCE_POOLS : InferencePoolSet =\
 {

@@ -35,11 +35,11 @@ def get_inference_pool(model_context : str, model_sources : DownloadSet) -> Infe
 		return INFERENCE_POOLS.get(app_context).get(inference_context)

-def create_inference_pool(model_sources : DownloadSet, execution_device_id : str, execution_providers : List[ExecutionProvider]) -> InferencePool:
+def create_inference_pool(model_sources : DownloadSet, execution_device_id : str, execution_provider_keys : List[ExecutionProviderKey]) -> InferencePool:
 	inference_pool : InferencePool = {}

 	for model_name in model_sources.keys():
-		inference_pool[model_name] = create_inference_session(model_sources.get(model_name).get('path'), execution_device_id, execution_providers)
+		inference_pool[model_name] = create_inference_session(model_sources.get(model_name).get('path'), execution_device_id, execution_provider_keys)
 	return inference_pool

@@ -53,9 +53,9 @@ def clear_inference_pool(model_context : str) -> None:
 			del INFERENCE_POOLS[app_context][inference_context]

-def create_inference_session(model_path : str, execution_device_id : str, execution_providers : List[ExecutionProvider]) -> InferenceSession:
-	inference_execution_providers = create_inference_execution_providers(execution_device_id, execution_providers)
-	return InferenceSession(model_path, providers = inference_execution_providers)
+def create_inference_session(model_path : str, execution_device_id : str, execution_provider_keys : List[ExecutionProviderKey]) -> InferenceSession:
+	execution_providers = create_execution_providers(execution_device_id, execution_provider_keys)
+	return InferenceSession(model_path, providers = execution_providers)

 def get_inference_context(model_context : str) -> str:
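The renamed `create_execution_providers` resolves provider keys into the `providers` argument that `onnxruntime.InferenceSession` accepts: a list mixing plain provider names and `(name, options)` tuples. A hedged sketch of that resolution; the key-to-provider table and the options handling shown here are illustrative, not the project's exact implementation:

```python
from typing import Dict, List, Tuple, Union

# plain provider name, or (name, options) tuple as accepted by onnxruntime
ExecutionProvider = Union[str, Tuple[str, Dict[str, str]]]

# assumed mapping from short keys to ONNX Runtime provider names
PROVIDER_SET : Dict[str, str] = {
	'cpu': 'CPUExecutionProvider',
	'cuda': 'CUDAExecutionProvider',
	'coreml': 'CoreMLExecutionProvider'
}

def create_execution_providers(execution_device_id : str, execution_provider_keys : List[str]) -> List[ExecutionProvider]:
	execution_providers : List[ExecutionProvider] = []
	for execution_provider_key in execution_provider_keys:
		execution_provider = PROVIDER_SET.get(execution_provider_key)
		if execution_provider == 'CUDAExecutionProvider':
			# device-bound providers take an options dict
			execution_providers.append((execution_provider, { 'device_id': execution_device_id }))
		elif execution_provider:
			execution_providers.append(execution_provider)
	return execution_providers
```

The resulting list can be passed directly as `InferenceSession(model_path, providers = ...)`; unknown keys are silently dropped in this sketch.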


@@ -2,7 +2,7 @@ import os
 from copy import copy
 from typing import List, Optional

-import facefusion.choices
+from facefusion.choices import job_statuses
 from facefusion.date_helper import get_current_date_time
 from facefusion.filesystem import create_directory, is_directory, is_file, move_file, remove_directory, remove_file, resolve_file_pattern
 from facefusion.jobs.job_helper import get_step_output_path

@@ -16,7 +16,7 @@ def init_jobs(jobs_path : str) -> bool:
 	global JOBS_PATH

 	JOBS_PATH = jobs_path
-	job_status_paths = [ os.path.join(JOBS_PATH, job_status) for job_status in facefusion.choices.job_statuses ]
+	job_status_paths = [ os.path.join(JOBS_PATH, job_status) for job_status in job_statuses ]

 	for job_status_path in job_status_paths:
 		create_directory(job_status_path)

@@ -245,7 +245,7 @@ def find_job_path(job_id : str) -> Optional[str]:
 	job_file_name = get_job_file_name(job_id)

 	if job_file_name:
-		for job_status in facefusion.choices.job_statuses:
+		for job_status in job_statuses:
 			job_pattern = os.path.join(JOBS_PATH, job_status, job_file_name)
 			job_paths = resolve_file_pattern(job_pattern)
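`init_jobs` materializes one subdirectory per job status under the jobs path, so later lookups like `find_job_path` only need to glob inside each status folder. A standalone sketch of that layout; the status names here are assumptions, not read from the diff:

```python
import os
from typing import List

# assumed job status names for illustration
JOB_STATUSES : List[str] = [ 'drafted', 'queued', 'completed', 'failed' ]

def init_jobs(jobs_path : str) -> bool:
	# create <jobs_path>/<status> for every known status
	for job_status in JOB_STATUSES:
		os.makedirs(os.path.join(jobs_path, job_status), exist_ok = True)
	return all(os.path.isdir(os.path.join(jobs_path, job_status)) for job_status in JOB_STATUSES)
```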


@@ -1,14 +1,14 @@
 from logging import Logger, basicConfig, getLogger
 from typing import Tuple

-import facefusion.choices
+from facefusion.choices import log_level_set
 from facefusion.common_helper import get_first, get_last
 from facefusion.typing import LogLevel, TableContents, TableHeaders

 def init(log_level : LogLevel) -> None:
 	basicConfig(format = '%(message)s')
-	get_package_logger().setLevel(facefusion.choices.log_level_set.get(log_level))
+	get_package_logger().setLevel(log_level_set.get(log_level))

 def get_package_logger() -> Logger:
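`log_level_set` maps the project's log level strings onto `logging` module constants before they reach `setLevel`. A minimal stand-in showing the idea; the mapping values are assumptions chosen to match the standard `logging` levels:

```python
import logging

# assumed mapping from level names to logging constants
LOG_LEVEL_SET = {
	'error': logging.ERROR,
	'warn': logging.WARNING,
	'info': logging.INFO,
	'debug': logging.DEBUG
}

def init(log_level : str) -> None:
	# configure the root handler format, then pin the package logger level
	logging.basicConfig(format = '%(message)s')
	logging.getLogger('facefusion').setLevel(LOG_LEVEL_SET.get(log_level, logging.INFO))
```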


@@ -4,7 +4,7 @@ METADATA =\
 {
 	'name': 'FaceFusion',
 	'description': 'Industry leading face manipulation platform',
-	'version': '3.1.0',
+	'version': 'NEXT',
 	'license': 'MIT',
 	'author': 'Henry Ruhs',
 	'url': 'https://facefusion.io'


@@ -1,8 +1,7 @@
 from typing import List, Sequence

 from facefusion.common_helper import create_float_range, create_int_range
-from facefusion.filesystem import list_directory, resolve_relative_path
-from facefusion.processors.typing import AgeModifierModel, DeepSwapperModel, ExpressionRestorerModel, FaceDebuggerItem, FaceEditorModel, FaceEnhancerModel, FaceSwapperModel, FaceSwapperSet, FrameColorizerModel, FrameEnhancerModel, LipSyncerModel
+from facefusion.processors.typing import AgeModifierModel, DeepSwapperModel, ExpressionRestorerModel, FaceDebuggerItem, FaceEditorModel, FaceEnhancerModel, FaceSwapperSet, FrameColorizerModel, FrameEnhancerModel, LipSyncerModel

 age_modifier_models : List[AgeModifierModel] = [ 'styleganex_age' ]
 deep_swapper_models : List[DeepSwapperModel] =\

@@ -20,7 +19,6 @@ deep_swapper_models : List[DeepSwapperModel] =\
 	'druuzil/benjamin_affleck_320',
 	'druuzil/benjamin_stiller_384',
 	'druuzil/bradley_pitt_224',
-	'druuzil/brie_larson_384',
 	'druuzil/bryan_cranston_320',
 	'druuzil/catherine_blanchett_352',
 	'druuzil/christian_bale_320',

@@ -63,7 +61,6 @@ deep_swapper_models : List[DeepSwapperModel] =\
 	'druuzil/lili_reinhart_320',
 	'druuzil/mads_mikkelsen_384',
 	'druuzil/mary_winstead_320',
-	'druuzil/margaret_qualley_384',
 	'druuzil/melina_juergens_320',
 	'druuzil/michael_fassbender_320',
 	'druuzil/michael_fox_320',

@@ -156,15 +153,6 @@ deep_swapper_models : List[DeepSwapperModel] =\
 	'rumateus/sophie_turner_224',
 	'rumateus/taylor_swift_224'
 ]
-
-custom_model_files = list_directory(resolve_relative_path('../.assets/models/custom'))
-
-if custom_model_files:
-	for model_file in custom_model_files:
-		model_id = '/'.join([ 'custom', model_file.get('name') ])
-		deep_swapper_models.append(model_id)
-
 expression_restorer_models : List[ExpressionRestorerModel] = [ 'live_portrait' ]
 face_debugger_items : List[FaceDebuggerItem] = [ 'bounding-box', 'face-landmark-5', 'face-landmark-5/68', 'face-landmark-68', 'face-landmark-68/5', 'face-mask', 'face-detector-score', 'face-landmarker-score', 'age', 'gender', 'race' ]
 face_editor_models : List[FaceEditorModel] = [ 'live_portrait' ]

@@ -182,7 +170,6 @@ face_swapper_set : FaceSwapperSet =\
 	'simswap_unofficial_512': [ '512x512', '768x768', '1024x1024' ],
 	'uniface_256': [ '256x256', '512x512', '768x768', '1024x1024' ]
 }
-face_swapper_models : List[FaceSwapperModel] = list(face_swapper_set.keys())
 frame_colorizer_models : List[FrameColorizerModel] = [ 'ddcolor', 'ddcolor_artistic', 'deoldify', 'deoldify_artistic', 'deoldify_stable' ]
 frame_colorizer_sizes : List[str] = [ '192x192', '256x256', '384x384', '512x512' ]
 frame_enhancer_models : List[FrameEnhancerModel] = [ 'clear_reality_x4', 'lsdir_x4', 'nomos8k_sc_x4', 'real_esrgan_x2', 'real_esrgan_x2_fp16', 'real_esrgan_x4', 'real_esrgan_x4_fp16', 'real_esrgan_x8', 'real_esrgan_x8_fp16', 'real_hatgan_x4', 'real_web_photo_x4', 'realistic_rescaler_x4', 'remacri_x4', 'siax_x4', 'span_kendata_x4', 'swin2_sr_x4', 'ultra_sharp_x4' ]
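One side of the diff above scans a custom assets directory and appends `custom/<name>` ids to the deep swapper model list at import time. A self-contained sketch of that id construction; the function name and directory argument are illustrative rather than the project's actual API:

```python
import os
from pathlib import Path
from typing import List

def collect_custom_model_ids(custom_directory : str) -> List[str]:
	# build 'custom/<stem>' ids from files found in a custom models directory
	model_ids : List[str] = []
	if os.path.isdir(custom_directory):
		for file_name in sorted(os.listdir(custom_directory)):
			stem = Path(file_name).stem
			if not stem.startswith(('.', '__')):
				model_ids.append('/'.join([ 'custom', stem ]))
	return model_ids
```

Such ids slot into the same `creator/name` convention used by the bundled `druuzil/...` and `iperov/...` entries.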


@@ -5,11 +5,11 @@ from typing import List
 import cv2
 import numpy

-import facefusion.choices
 import facefusion.jobs.job_manager
 import facefusion.jobs.job_store
 import facefusion.processors.core as processors
 from facefusion import config, content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, inference_manager, logger, process_manager, state_manager, wording
+from facefusion.choices import execution_provider_set
 from facefusion.common_helper import create_int_metavar
 from facefusion.download import conditional_download_hashes, conditional_download_sources, resolve_download_url
 from facefusion.execution import has_execution_provider

@@ -65,11 +65,13 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:

 def get_inference_pool() -> InferencePool:
 	model_sources = get_model_options().get('sources')
-	return inference_manager.get_inference_pool(__name__, model_sources)
+	model_context = __name__ + '.' + state_manager.get_item('age_modifier_model')
+	return inference_manager.get_inference_pool(model_context, model_sources)

 def clear_inference_pool() -> None:
-	inference_manager.clear_inference_pool(__name__)
+	model_context = __name__ + '.' + state_manager.get_item('age_modifier_model')
+	inference_manager.clear_inference_pool(model_context)

 def get_model_options() -> ModelOptions:

@@ -161,7 +163,7 @@ def forward(crop_vision_frame : VisionFrame, extend_vision_frame : VisionFrame,
 	age_modifier_inputs = {}

 	if has_execution_provider('coreml'):
-		age_modifier.set_providers([ facefusion.choices.execution_provider_set.get('cpu') ])
+		age_modifier.set_providers([ execution_provider_set.get('cpu') ])

 	for age_modifier_input in age_modifier.get_inputs():
 		if age_modifier_input.name == 'target':


@ -4,7 +4,6 @@ from typing import List, Tuple
import cv2 import cv2
import numpy import numpy
from cv2.typing import Size
import facefusion.jobs.job_manager import facefusion.jobs.job_manager
import facefusion.jobs.job_store import facefusion.jobs.job_store
@ -14,10 +13,10 @@ from facefusion.common_helper import create_int_metavar
from facefusion.download import conditional_download_hashes, conditional_download_sources, resolve_download_url_by_provider from facefusion.download import conditional_download_hashes, conditional_download_sources, resolve_download_url_by_provider
from facefusion.face_analyser import get_many_faces, get_one_face from facefusion.face_analyser import get_many_faces, get_one_face
from facefusion.face_helper import paste_back, warp_face_by_face_landmark_5 from facefusion.face_helper import paste_back, warp_face_by_face_landmark_5
from facefusion.face_masker import create_occlusion_mask, create_region_mask, create_static_box_mask from facefusion.face_masker import create_occlusion_mask, create_static_box_mask
from facefusion.face_selector import find_similar_faces, sort_and_filter_faces from facefusion.face_selector import find_similar_faces, sort_and_filter_faces
from facefusion.face_store import get_reference_faces from facefusion.face_store import get_reference_faces
from facefusion.filesystem import in_directory, is_image, is_video, list_directory, resolve_relative_path, same_file_extension from facefusion.filesystem import in_directory, is_image, is_video, resolve_relative_path, same_file_extension
from facefusion.processors import choices as processors_choices from facefusion.processors import choices as processors_choices
from facefusion.processors.typing import DeepSwapperInputs, DeepSwapperMorph from facefusion.processors.typing import DeepSwapperInputs, DeepSwapperMorph
from facefusion.program_helper import find_argument_group from facefusion.program_helper import find_argument_group
@ -33,167 +32,165 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
if download_scope == 'full': if download_scope == 'full':
model_config.extend( model_config.extend(
[ [
('druuzil', 'adrianne_palicki_384'), ('druuzil', 'adrianne_palicki_384', (384, 384)),
('druuzil', 'agnetha_falskog_224'), ('druuzil', 'agnetha_falskog_224', (224, 224)),
('druuzil', 'alan_ritchson_320'), ('druuzil', 'alan_ritchson_320', (320, 320)),
('druuzil', 'alicia_vikander_320'), ('druuzil', 'alicia_vikander_320', (320, 320)),
('druuzil', 'amber_midthunder_320'), ('druuzil', 'amber_midthunder_320', (320, 320)),
('druuzil', 'andras_arato_384'), ('druuzil', 'andras_arato_384', (384, 384)),
('druuzil', 'andrew_tate_320'), ('druuzil', 'andrew_tate_320', (320, 320)),
('druuzil', 'anne_hathaway_320'), ('druuzil', 'anne_hathaway_320', (320, 320)),
('druuzil', 'anya_chalotra_320'), ('druuzil', 'anya_chalotra_320', (320, 320)),
('druuzil', 'arnold_schwarzenegger_320'), ('druuzil', 'arnold_schwarzenegger_320', (320, 320)),
('druuzil', 'benjamin_affleck_320'), ('druuzil', 'benjamin_affleck_320', (320, 320)),
('druuzil', 'benjamin_stiller_384'), ('druuzil', 'benjamin_stiller_384', (384, 384)),
('druuzil', 'bradley_pitt_224'), ('druuzil', 'bradley_pitt_224', (224, 224)),
('druuzil', 'brie_larson_384'), ('druuzil', 'bryan_cranston_320', (320, 320)),
('druuzil', 'bryan_cranston_320'), ('druuzil', 'catherine_blanchett_352', (352, 352)),
('druuzil', 'catherine_blanchett_352'), ('druuzil', 'christian_bale_320', (320, 320)),
('druuzil', 'christian_bale_320'), ('druuzil', 'christopher_hemsworth_320', (320, 320)),
('druuzil', 'christopher_hemsworth_320'), ('druuzil', 'christoph_waltz_384', (384, 384)),
('druuzil', 'christoph_waltz_384'), ('druuzil', 'cillian_murphy_320', (320, 320)),
('druuzil', 'cillian_murphy_320'), ('druuzil', 'cobie_smulders_256', (256, 256)),
('druuzil', 'cobie_smulders_256'), ('druuzil', 'dwayne_johnson_384', (384, 384)),
('druuzil', 'dwayne_johnson_384'), ('druuzil', 'edward_norton_320', (320, 320)),
('druuzil', 'edward_norton_320'), ('druuzil', 'elisabeth_shue_320', (320, 320)),
('druuzil', 'elisabeth_shue_320'), ('druuzil', 'elizabeth_olsen_384', (384, 384)),
('druuzil', 'elizabeth_olsen_384'), ('druuzil', 'elon_musk_320', (320, 320)),
('druuzil', 'elon_musk_320'), ('druuzil', 'emily_blunt_320', (320, 320)),
('druuzil', 'emily_blunt_320'), ('druuzil', 'emma_stone_384', (384, 384)),
('druuzil', 'emma_stone_384'), ('druuzil', 'emma_watson_320', (320, 320)),
('druuzil', 'emma_watson_320'), ('druuzil', 'erin_moriarty_384', (384, 384)),
('druuzil', 'erin_moriarty_384'), ('druuzil', 'eva_green_320', (320, 320)),
('druuzil', 'eva_green_320'), ('druuzil', 'ewan_mcgregor_320', (320, 320)),
('druuzil', 'ewan_mcgregor_320'), ('druuzil', 'florence_pugh_320', (320, 320)),
('druuzil', 'florence_pugh_320'), ('druuzil', 'freya_allan_320', (320, 320)),
('druuzil', 'freya_allan_320'), ('druuzil', 'gary_cole_224', (224, 224)),
('druuzil', 'gary_cole_224'), ('druuzil', 'gigi_hadid_224', (224, 224)),
('druuzil', 'gigi_hadid_224'), ('druuzil', 'harrison_ford_384', (384, 384)),
('druuzil', 'harrison_ford_384'), ('druuzil', 'hayden_christensen_320', (320, 320)),
('druuzil', 'hayden_christensen_320'), ('druuzil', 'heath_ledger_320', (320, 320)),
('druuzil', 'heath_ledger_320'), ('druuzil', 'henry_cavill_448', (448, 448)),
('druuzil', 'henry_cavill_448'), ('druuzil', 'hugh_jackman_384', (384, 384)),
('druuzil', 'hugh_jackman_384'), ('druuzil', 'idris_elba_320', (320, 320)),
('druuzil', 'idris_elba_320'), ('druuzil', 'jack_nicholson_320', (320, 320)),
('druuzil', 'jack_nicholson_320'), ('druuzil', 'james_mcavoy_320', (320, 320)),
('druuzil', 'james_mcavoy_320'), ('druuzil', 'james_varney_320', (320, 320)),
('druuzil', 'james_varney_320'), ('druuzil', 'jason_momoa_320', (320, 320)),
('druuzil', 'jason_momoa_320'), ('druuzil', 'jason_statham_320', (320, 320)),
('druuzil', 'jason_statham_320'), ('druuzil', 'jennifer_connelly_384', (384, 384)),
('druuzil', 'jennifer_connelly_384'), ('druuzil', 'jimmy_donaldson_320', (320, 320)),
('druuzil', 'jimmy_donaldson_320'), ('druuzil', 'jordan_peterson_384', (384, 384)),
('druuzil', 'jordan_peterson_384'), ('druuzil', 'karl_urban_224', (224, 224)),
('druuzil', 'karl_urban_224'), ('druuzil', 'kate_beckinsale_384', (384, 384)),
('druuzil', 'kate_beckinsale_384'), ('druuzil', 'laurence_fishburne_384', (384, 384)),
('druuzil', 'laurence_fishburne_384'), ('druuzil', 'lili_reinhart_320', (320, 320)),
('druuzil', 'lili_reinhart_320'), ('druuzil', 'mads_mikkelsen_384', (384, 384)),
('druuzil', 'mads_mikkelsen_384'), ('druuzil', 'mary_winstead_320', (320, 320)),
('druuzil', 'mary_winstead_320'), ('druuzil', 'melina_juergens_320', (320, 320)),
('druuzil', 'margaret_qualley_384'), ('druuzil', 'michael_fassbender_320', (320, 320)),
('druuzil', 'melina_juergens_320'), ('druuzil', 'michael_fox_320', (320, 320)),
('druuzil', 'michael_fassbender_320'), ('druuzil', 'millie_bobby_brown_320', (320, 320)),
('druuzil', 'michael_fox_320'), ('druuzil', 'morgan_freeman_320', (320, 320)),
('druuzil', 'millie_bobby_brown_320'), ('druuzil', 'patrick_stewart_320', (320, 320)),
('druuzil', 'morgan_freeman_320'), ('druuzil', 'rebecca_ferguson_320', (320, 320)),
('druuzil', 'patrick_stewart_320'), ('druuzil', 'scarlett_johansson_320', (320, 320)),
('druuzil', 'rebecca_ferguson_320'), ('druuzil', 'seth_macfarlane_384', (384, 384)),
('druuzil', 'scarlett_johansson_320'), ('druuzil', 'thomas_cruise_320', (320, 320)),
('druuzil', 'seth_macfarlane_384'), ('druuzil', 'thomas_hanks_384', (384, 384)),
('druuzil', 'thomas_cruise_320'), ('edel', 'emma_roberts_224', (224, 224)),
('druuzil', 'thomas_hanks_384'), ('edel', 'ivanka_trump_224', (224, 224)),
('edel', 'emma_roberts_224'), ('edel', 'lize_dzjabrailova_224', (224, 224)),
('edel', 'ivanka_trump_224'), ('edel', 'sidney_sweeney_224', (224, 224)),
('edel', 'lize_dzjabrailova_224'), ('edel', 'winona_ryder_224', (224, 224))
('edel', 'sidney_sweeney_224'),
('edel', 'winona_ryder_224')
]) ])
if download_scope in [ 'lite', 'full' ]: if download_scope in [ 'lite', 'full' ]:
model_config.extend( model_config.extend(
[ [
('iperov', 'alexandra_daddario_224'), ('iperov', 'alexandra_daddario_224', (224, 224)),
('iperov', 'alexei_navalny_224'), ('iperov', 'alexei_navalny_224', (224, 224)),
('iperov', 'amber_heard_224'), ('iperov', 'amber_heard_224', (224, 224)),
('iperov', 'dilraba_dilmurat_224'), ('iperov', 'dilraba_dilmurat_224', (224, 224)),
('iperov', 'elon_musk_224'), ('iperov', 'elon_musk_224', (224, 224)),
('iperov', 'emilia_clarke_224'), ('iperov', 'emilia_clarke_224', (224, 224)),
('iperov', 'emma_watson_224'), ('iperov', 'emma_watson_224', (224, 224)),
('iperov', 'erin_moriarty_224'), ('iperov', 'erin_moriarty_224', (224, 224)),
('iperov', 'jackie_chan_224'), ('iperov', 'jackie_chan_224', (224, 224)),
('iperov', 'james_carrey_224'), ('iperov', 'james_carrey_224', (224, 224)),
('iperov', 'jason_statham_320'), ('iperov', 'jason_statham_320', (320, 320)),
('iperov', 'keanu_reeves_320'), ('iperov', 'keanu_reeves_320', (320, 320)),
('iperov', 'margot_robbie_224'), ('iperov', 'margot_robbie_224', (224, 224)),
('iperov', 'natalie_dormer_224'), ('iperov', 'natalie_dormer_224', (224, 224)),
('iperov', 'nicolas_coppola_224'), ('iperov', 'nicolas_coppola_224', (224, 224)),
('iperov', 'robert_downey_224'), ('iperov', 'robert_downey_224', (224, 224)),
('iperov', 'rowan_atkinson_224'), ('iperov', 'rowan_atkinson_224', (224, 224)),
('iperov', 'ryan_reynolds_224'), ('iperov', 'ryan_reynolds_224', (224, 224)),
('iperov', 'scarlett_johansson_224'), ('iperov', 'scarlett_johansson_224', (224, 224)),
('iperov', 'sylvester_stallone_224'), ('iperov', 'sylvester_stallone_224', (224, 224)),
('iperov', 'thomas_cruise_224'), ('iperov', 'thomas_cruise_224', (224, 224)),
('iperov', 'thomas_holland_224'), ('iperov', 'thomas_holland_224', (224, 224)),
('iperov', 'vin_diesel_224'), ('iperov', 'vin_diesel_224', (224, 224)),
('iperov', 'vladimir_putin_224') ('iperov', 'vladimir_putin_224', (224, 224))
]) ])
if download_scope == 'full': if download_scope == 'full':
model_config.extend( model_config.extend(
[ [
('jen', 'angelica_trae_288'), ('jen', 'angelica_trae_288', (288, 288)),
('jen', 'ella_freya_224'), ('jen', 'ella_freya_224', (224, 224)),
('jen', 'emma_myers_320'), ('jen', 'emma_myers_320', (320, 320)),
('jen', 'evie_pickerill_224'), ('jen', 'evie_pickerill_224', (224, 224)),
('jen', 'kang_hyewon_320'), ('jen', 'kang_hyewon_320', (320, 320)),
('jen', 'maddie_mead_224'), ('jen', 'maddie_mead_224', (224, 224)),
('jen', 'nicole_turnbull_288'), ('jen', 'nicole_turnbull_288', (288, 288)),
-('mats', 'alica_schmidt_320'),
+('mats', 'alica_schmidt_320', (320, 320)),
-('mats', 'ashley_alexiss_224'),
+('mats', 'ashley_alexiss_224', (224, 224)),
-('mats', 'billie_eilish_224'),
+('mats', 'billie_eilish_224', (224, 224)),
-('mats', 'brie_larson_224'),
+('mats', 'brie_larson_224', (224, 224)),
-('mats', 'cara_delevingne_224'),
+('mats', 'cara_delevingne_224', (224, 224)),
-('mats', 'carolin_kebekus_224'),
+('mats', 'carolin_kebekus_224', (224, 224)),
-('mats', 'chelsea_clinton_224'),
+('mats', 'chelsea_clinton_224', (224, 224)),
-('mats', 'claire_boucher_224'),
+('mats', 'claire_boucher_224', (224, 224)),
-('mats', 'corinna_kopf_224'),
+('mats', 'corinna_kopf_224', (224, 224)),
-('mats', 'florence_pugh_224'),
+('mats', 'florence_pugh_224', (224, 224)),
-('mats', 'hillary_clinton_224'),
+('mats', 'hillary_clinton_224', (224, 224)),
-('mats', 'jenna_fischer_224'),
+('mats', 'jenna_fischer_224', (224, 224)),
-('mats', 'kim_jisoo_320'),
+('mats', 'kim_jisoo_320', (320, 320)),
-('mats', 'mica_suarez_320'),
+('mats', 'mica_suarez_320', (320, 320)),
-('mats', 'shailene_woodley_224'),
+('mats', 'shailene_woodley_224', (224, 224)),
-('mats', 'shraddha_kapoor_320'),
+('mats', 'shraddha_kapoor_320', (320, 320)),
-('mats', 'yu_jimin_352'),
+('mats', 'yu_jimin_352', (352, 352)),
-('rumateus', 'alison_brie_224'),
+('rumateus', 'alison_brie_224', (224, 224)),
-('rumateus', 'amber_heard_224'),
+('rumateus', 'amber_heard_224', (224, 224)),
-('rumateus', 'angelina_jolie_224'),
+('rumateus', 'angelina_jolie_224', (224, 224)),
-('rumateus', 'aubrey_plaza_224'),
+('rumateus', 'aubrey_plaza_224', (224, 224)),
-('rumateus', 'bridget_regan_224'),
+('rumateus', 'bridget_regan_224', (224, 224)),
-('rumateus', 'cobie_smulders_224'),
+('rumateus', 'cobie_smulders_224', (224, 224)),
-('rumateus', 'deborah_woll_224'),
+('rumateus', 'deborah_woll_224', (224, 224)),
-('rumateus', 'dua_lipa_224'),
+('rumateus', 'dua_lipa_224', (224, 224)),
-('rumateus', 'emma_stone_224'),
+('rumateus', 'emma_stone_224', (224, 224)),
-('rumateus', 'hailee_steinfeld_224'),
+('rumateus', 'hailee_steinfeld_224', (224, 224)),
-('rumateus', 'hilary_duff_224'),
+('rumateus', 'hilary_duff_224', (224, 224)),
-('rumateus', 'jessica_alba_224'),
+('rumateus', 'jessica_alba_224', (224, 224)),
-('rumateus', 'jessica_biel_224'),
+('rumateus', 'jessica_biel_224', (224, 224)),
-('rumateus', 'john_cena_224'),
+('rumateus', 'john_cena_224', (224, 224)),
-('rumateus', 'kim_kardashian_224'),
+('rumateus', 'kim_kardashian_224', (224, 224)),
-('rumateus', 'kristen_bell_224'),
+('rumateus', 'kristen_bell_224', (224, 224)),
-('rumateus', 'lucy_liu_224'),
+('rumateus', 'lucy_liu_224', (224, 224)),
-('rumateus', 'margot_robbie_224'),
+('rumateus', 'margot_robbie_224', (224, 224)),
-('rumateus', 'megan_fox_224'),
+('rumateus', 'megan_fox_224', (224, 224)),
-('rumateus', 'meghan_markle_224'),
+('rumateus', 'meghan_markle_224', (224, 224)),
-('rumateus', 'millie_bobby_brown_224'),
+('rumateus', 'millie_bobby_brown_224', (224, 224)),
-('rumateus', 'natalie_portman_224'),
+('rumateus', 'natalie_portman_224', (224, 224)),
-('rumateus', 'nicki_minaj_224'),
+('rumateus', 'nicki_minaj_224', (224, 224)),
-('rumateus', 'olivia_wilde_224'),
+('rumateus', 'olivia_wilde_224', (224, 224)),
-('rumateus', 'shay_mitchell_224'),
+('rumateus', 'shay_mitchell_224', (224, 224)),
-('rumateus', 'sophie_turner_224'),
+('rumateus', 'sophie_turner_224', (224, 224)),
-('rumateus', 'taylor_swift_224')
+('rumateus', 'taylor_swift_224', (224, 224))
])
model_set : ModelSet = {}
-for model_scope, model_name in model_config:
+for model_creator, model_name, model_size in model_config:
-model_id = '/'.join([ model_scope, model_name ])
+model_id = '/'.join([ model_creator, model_name ])
model_set[model_id] =\
{
@@ -201,50 +198,34 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
{
'deep_swapper':
{
-'url': resolve_download_url_by_provider('huggingface', 'deepfacelive-models-' + model_scope, model_name + '.hash'),
+'url': resolve_download_url_by_provider('huggingface', 'deepfacelive-models-' + model_creator, model_name + '.hash'),
-'path': resolve_relative_path('../.assets/models/' + model_scope + '/' + model_name + '.hash')
+'path': resolve_relative_path('../.assets/models/' + model_creator + '/' + model_name + '.hash')
}
},
'sources':
{
'deep_swapper':
{
-'url': resolve_download_url_by_provider('huggingface', 'deepfacelive-models-' + model_scope, model_name + '.dfm'),
+'url': resolve_download_url_by_provider('huggingface', 'deepfacelive-models-' + model_creator, model_name + '.dfm'),
-'path': resolve_relative_path('../.assets/models/' + model_scope + '/' + model_name + '.dfm')
+'path': resolve_relative_path('../.assets/models/' + model_creator + '/' + model_name + '.dfm')
}
},
-'template': 'dfl_whole_face'
+'template': 'dfl_whole_face',
+'size': model_size
}
-custom_model_files = list_directory(resolve_relative_path('../.assets/models/custom'))
-if custom_model_files:
-for model_file in custom_model_files:
-model_id = '/'.join([ 'custom', model_file.get('name') ])
-model_set[model_id] =\
-{
-'sources':
-{
-'deep_swapper':
-{
-'path': resolve_relative_path(model_file.get('path'))
-}
-},
-'template': 'dfl_whole_face'
-}
return model_set
def get_inference_pool() -> InferencePool:
model_sources = get_model_options().get('sources')
-return inference_manager.get_inference_pool(__name__, model_sources)
+model_context = __name__ + '.' + state_manager.get_item('deep_swapper_model')
+return inference_manager.get_inference_pool(model_context, model_sources)
def clear_inference_pool() -> None:
-inference_manager.clear_inference_pool(__name__)
+model_context = __name__ + '.' + state_manager.get_item('deep_swapper_model')
+inference_manager.clear_inference_pool(model_context)
def get_model_options() -> ModelOptions:
@@ -252,15 +233,6 @@ def get_model_options() -> ModelOptions:
return create_static_model_set('full').get(deep_swapper_model)
-def get_model_size() -> Size:
-deep_swapper = get_inference_pool().get('deep_swapper')
-deep_swapper_outputs = deep_swapper.get_outputs()
-for deep_swapper_output in deep_swapper_outputs:
-return deep_swapper_output.shape[1:3]
-return 0, 0
def register_args(program : ArgumentParser) -> None:
group_processors = find_argument_group(program, 'processors')
if group_processors:
@@ -278,9 +250,7 @@ def pre_check() -> bool:
model_hashes = get_model_options().get('hashes')
model_sources = get_model_options().get('sources')
-if model_hashes and model_sources:
-return conditional_download_hashes(model_hashes) and conditional_download_sources(model_sources)
-return True
+return conditional_download_hashes(model_hashes) and conditional_download_sources(model_sources)
def pre_process(mode : ProcessMode) -> bool:
@@ -311,7 +281,7 @@ def post_process() -> None:
def swap_face(target_face : Face, temp_vision_frame : VisionFrame) -> VisionFrame:
model_template = get_model_options().get('template')
-model_size = get_model_size()
+model_size = get_model_options().get('size')
crop_vision_frame, affine_matrix = warp_face_by_face_landmark_5(temp_vision_frame, target_face.landmark_set.get('5/68'), model_template, model_size)
crop_vision_frame_raw = crop_vision_frame.copy()
box_mask = create_static_box_mask(crop_vision_frame.shape[:2][::-1], state_manager.get_item('face_mask_blur'), state_manager.get_item('face_mask_padding'))
@@ -330,11 +300,6 @@ def swap_face(target_face : Face, temp_vision_frame : VisionFrame) -> VisionFram
crop_vision_frame = normalize_crop_frame(crop_vision_frame)
crop_vision_frame = conditional_match_frame_color(crop_vision_frame_raw, crop_vision_frame)
crop_masks.append(prepare_crop_mask(crop_source_mask, crop_target_mask))
-if 'region' in state_manager.get_item('face_mask_types'):
-region_mask = create_region_mask(crop_vision_frame, state_manager.get_item('face_mask_regions'))
-crop_masks.append(region_mask)
crop_mask = numpy.minimum.reduce(crop_masks).clip(0, 1)
paste_vision_frame = paste_back(temp_vision_frame, crop_vision_frame, crop_mask, affine_matrix)
return paste_vision_frame
@@ -379,7 +344,7 @@ def normalize_crop_frame(crop_vision_frame : VisionFrame) -> VisionFrame:
def prepare_crop_mask(crop_source_mask : Mask, crop_target_mask : Mask) -> Mask:
-model_size = get_model_size()
+model_size = get_model_options().get('size')
blur_size = 6.25
kernel_size = 3
crop_mask = numpy.minimum.reduce([ crop_source_mask, crop_target_mask ])
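The hunks above replace the runtime `get_model_size()` probe (which read the size back from the loaded ONNX session's output shape) with a static `'size'` entry built from the new three-element config tuples. A minimal sketch of that pattern, using a reduced hypothetical model set rather than the real one:

```python
from typing import Dict, Tuple

Size = Tuple[int, int]

# Reduced stand-in for the (creator, name, size) tuples above: the size now
# travels with the config instead of being probed from the loaded model.
model_config = \
[
	('mats', 'alica_schmidt_320', (320, 320)),
	('rumateus', 'taylor_swift_224', (224, 224))
]

def create_model_set() -> Dict[str, Dict[str, object]]:
	model_set : Dict[str, Dict[str, object]] = {}
	for model_creator, model_name, model_size in model_config:
		# model ids keep the creator/name convention, e.g. 'mats/alica_schmidt_320'
		model_id = '/'.join([ model_creator, model_name ])
		model_set[model_id] =\
		{
			'template': 'dfl_whole_face',
			'size': model_size
		}
	return model_set

# consumers read the size statically, mirroring get_model_options().get('size')
print(create_model_set().get('mats/alica_schmidt_320').get('size'))
```

The payoff is that `swap_face()` and `prepare_crop_mask()` no longer need a live inference session just to learn the crop resolution.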

View File

@@ -77,7 +77,8 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
def get_inference_pool() -> InferencePool:
model_sources = get_model_options().get('sources')
-return inference_manager.get_inference_pool(__name__, model_sources)
+model_context = __name__ + '.' + state_manager.get_item('expression_restorer_model')
+return inference_manager.get_inference_pool(model_context, model_sources)
def clear_inference_pool() -> None:
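Across all the processor modules in this comparison, the inference pool key changes from the bare module name to module name plus the currently selected model. A toy sketch (not the FaceFusion API) of why that matters for the cache:

```python
# Hypothetical miniature of an inference manager cache: one pool per key.
INFERENCE_POOLS = {}

def get_inference_pool(model_context, model_sources):
	# cache a pool per module-plus-model key instead of per module
	if model_context not in INFERENCE_POOLS:
		INFERENCE_POOLS[model_context] = { 'sources': model_sources }
	return INFERENCE_POOLS[model_context]

def clear_inference_pool(model_context):
	INFERENCE_POOLS.pop(model_context, None)

# with only the module name as key, switching models would hand back a stale
# pool; with the model name appended, each model gets its own entry
pool_a = get_inference_pool(__name__ + '.model_a', 'sources_a')
pool_b = get_inference_pool(__name__ + '.model_b', 'sources_b')
assert pool_a is not pool_b
clear_inference_pool(__name__ + '.model_a')
```

The same two-line change is applied to face_editor, face_enhancer, face_swapper, frame_colorizer, frame_enhancer and lip_syncer below.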

View File

@@ -106,11 +106,13 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
def get_inference_pool() -> InferencePool:
model_sources = get_model_options().get('sources')
-return inference_manager.get_inference_pool(__name__, model_sources)
+model_context = __name__ + '.' + state_manager.get_item('face_editor_model')
+return inference_manager.get_inference_pool(model_context, model_sources)
def clear_inference_pool() -> None:
-inference_manager.clear_inference_pool(__name__)
+model_context = __name__ + '.' + state_manager.get_item('face_editor_model')
+inference_manager.clear_inference_pool(model_context)
def get_model_options() -> ModelOptions:

View File

@@ -223,11 +223,13 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
def get_inference_pool() -> InferencePool:
model_sources = get_model_options().get('sources')
-return inference_manager.get_inference_pool(__name__, model_sources)
+model_context = __name__ + '.' + state_manager.get_item('face_enhancer_model')
+return inference_manager.get_inference_pool(model_context, model_sources)
def clear_inference_pool() -> None:
-inference_manager.clear_inference_pool(__name__)
+model_context = __name__ + '.' + state_manager.get_item('face_enhancer_model')
+inference_manager.clear_inference_pool(model_context)
def get_model_options() -> ModelOptions:

View File

@@ -4,11 +4,11 @@ from typing import List, Tuple
import numpy
-import facefusion.choices
import facefusion.jobs.job_manager
import facefusion.jobs.job_store
import facefusion.processors.core as processors
from facefusion import config, content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, inference_manager, logger, process_manager, state_manager, wording
+from facefusion.choices import execution_provider_set
from facefusion.common_helper import get_first
from facefusion.download import conditional_download_hashes, conditional_download_sources, resolve_download_url
from facefusion.execution import has_execution_provider
@@ -337,11 +337,13 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
def get_inference_pool() -> InferencePool:
model_sources = get_model_options().get('sources')
-return inference_manager.get_inference_pool(__name__, model_sources)
+model_context = __name__ + '.' + state_manager.get_item('face_swapper_model')
+return inference_manager.get_inference_pool(model_context, model_sources)
def clear_inference_pool() -> None:
-inference_manager.clear_inference_pool(__name__)
+model_context = __name__ + '.' + state_manager.get_item('face_swapper_model')
+inference_manager.clear_inference_pool(model_context)
def get_model_options() -> ModelOptions:
@@ -352,7 +354,7 @@ def get_model_options() -> ModelOptions:
def register_args(program : ArgumentParser) -> None:
group_processors = find_argument_group(program, 'processors')
if group_processors:
-group_processors.add_argument('--face-swapper-model', help = wording.get('help.face_swapper_model'), default = config.get_str_value('processors.face_swapper_model', 'inswapper_128_fp16'), choices = processors_choices.face_swapper_models)
+group_processors.add_argument('--face-swapper-model', help = wording.get('help.face_swapper_model'), default = config.get_str_value('processors.face_swapper_model', 'inswapper_128_fp16'), choices = processors_choices.face_swapper_set.keys())
known_args, _ = program.parse_known_args()
face_swapper_pixel_boost_choices = processors_choices.face_swapper_set.get(known_args.face_swapper_model)
group_processors.add_argument('--face-swapper-pixel-boost', help = wording.get('help.face_swapper_pixel_boost'), default = config.get_str_value('processors.face_swapper_pixel_boost', get_first(face_swapper_pixel_boost_choices)), choices = face_swapper_pixel_boost_choices)
@@ -447,7 +449,7 @@ def forward_swap_face(source_face : Face, crop_vision_frame : VisionFrame) -> Vi
face_swapper_inputs = {}
if has_execution_provider('coreml') and model_type in [ 'ghost', 'uniface' ]:
-face_swapper.set_providers([ facefusion.choices.execution_provider_set.get('cpu') ])
+face_swapper.set_providers([ execution_provider_set.get('cpu') ])
for face_swapper_input in face_swapper.get_inputs():
if face_swapper_input.name == 'source':
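The guard above (touched by the "Fix CoreML partially" commit) forces the ghost and uniface swappers onto the CPU provider when CoreML is active. A hedged sketch of that fallback pattern with stand-in data; the real code calls `set_providers()` on an onnxruntime session:

```python
# Stand-in for facefusion.choices.execution_provider_set (contents assumed).
execution_provider_set = \
{
	'cpu': 'CPUExecutionProvider',
	'coreml': 'CoreMLExecutionProvider'
}

def pick_providers(active_providers, model_type):
	# ghost and uniface are assumed to misbehave under CoreML, so they are
	# pinned to the CPU provider; everything else keeps the active providers
	if 'CoreMLExecutionProvider' in active_providers and model_type in [ 'ghost', 'uniface' ]:
		return [ execution_provider_set.get('cpu') ]
	return active_providers

print(pick_providers([ 'CoreMLExecutionProvider' ], 'ghost'))
```

Note the related import change at the top of the file: the module now imports `execution_provider_set` directly instead of going through `facefusion.choices`.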

View File

@@ -129,11 +129,13 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
def get_inference_pool() -> InferencePool:
model_sources = get_model_options().get('sources')
-return inference_manager.get_inference_pool(__name__, model_sources)
+model_context = __name__ + '.' + state_manager.get_item('frame_colorizer_model')
+return inference_manager.get_inference_pool(model_context, model_sources)
def clear_inference_pool() -> None:
-inference_manager.clear_inference_pool(__name__)
+model_context = __name__ + '.' + state_manager.get_item('frame_colorizer_model')
+inference_manager.clear_inference_pool(model_context)
def get_model_options() -> ModelOptions:

View File

@@ -386,11 +386,13 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
def get_inference_pool() -> InferencePool:
model_sources = get_model_options().get('sources')
-return inference_manager.get_inference_pool(__name__, model_sources)
+model_context = __name__ + '.' + state_manager.get_item('frame_enhancer_model')
+return inference_manager.get_inference_pool(model_context, model_sources)
def clear_inference_pool() -> None:
-inference_manager.clear_inference_pool(__name__)
+model_context = __name__ + '.' + state_manager.get_item('frame_enhancer_model')
+inference_manager.clear_inference_pool(model_context)
def get_model_options() -> ModelOptions:

View File

@@ -75,11 +75,13 @@ def create_static_model_set(download_scope : DownloadScope) -> ModelSet:
def get_inference_pool() -> InferencePool:
model_sources = get_model_options().get('sources')
-return inference_manager.get_inference_pool(__name__, model_sources)
+model_context = __name__ + '.' + state_manager.get_item('lip_syncer_model')
+return inference_manager.get_inference_pool(model_context, model_sources)
def clear_inference_pool() -> None:
-inference_manager.clear_inference_pool(__name__)
+model_context = __name__ + '.' + state_manager.get_item('lip_syncer_model')
+inference_manager.clear_inference_pool(model_context)
def get_model_options() -> ModelOptions:

View File

@@ -5,7 +5,155 @@ from numpy._typing import NDArray
from facefusion.typing import AppContext, AudioFrame, Face, FaceSet, VisionFrame
AgeModifierModel = Literal['styleganex_age']
-DeepSwapperModel = str
+DeepSwapperModel = Literal\
+[
+'druuzil/adrianne_palicki_384',
+'druuzil/agnetha_falskog_224',
+'druuzil/alan_ritchson_320',
+'druuzil/alicia_vikander_320',
+'druuzil/amber_midthunder_320',
+'druuzil/andras_arato_384',
+'druuzil/andrew_tate_320',
+'druuzil/anne_hathaway_320',
+'druuzil/anya_chalotra_320',
+'druuzil/arnold_schwarzenegger_320',
+'druuzil/benjamin_affleck_320',
+'druuzil/benjamin_stiller_384',
+'druuzil/bradley_pitt_224',
+'druuzil/bryan_cranston_320',
+'druuzil/catherine_blanchett_352',
+'druuzil/christian_bale_320',
+'druuzil/christopher_hemsworth_320',
+'druuzil/christoph_waltz_384',
+'druuzil/cillian_murphy_320',
+'druuzil/cobie_smulders_256',
+'druuzil/dwayne_johnson_384',
+'druuzil/edward_norton_320',
+'druuzil/elisabeth_shue_320',
+'druuzil/elizabeth_olsen_384',
+'druuzil/elon_musk_320',
+'druuzil/emily_blunt_320',
+'druuzil/emma_stone_384',
+'druuzil/emma_watson_320',
+'druuzil/erin_moriarty_384',
+'druuzil/eva_green_320',
+'druuzil/ewan_mcgregor_320',
+'druuzil/florence_pugh_320',
+'druuzil/freya_allan_320',
+'druuzil/gary_cole_224',
+'druuzil/gigi_hadid_224',
+'druuzil/harrison_ford_384',
+'druuzil/hayden_christensen_320',
+'druuzil/heath_ledger_320',
+'druuzil/henry_cavill_448',
+'druuzil/hugh_jackman_384',
+'druuzil/idris_elba_320',
+'druuzil/jack_nicholson_320',
+'druuzil/james_mcavoy_320',
+'druuzil/james_varney_320',
+'druuzil/jason_momoa_320',
+'druuzil/jason_statham_320',
+'druuzil/jennifer_connelly_384',
+'druuzil/jimmy_donaldson_320',
+'druuzil/jordan_peterson_384',
+'druuzil/karl_urban_224',
+'druuzil/kate_beckinsale_384',
+'druuzil/laurence_fishburne_384',
+'druuzil/lili_reinhart_320',
+'druuzil/mads_mikkelsen_384',
+'druuzil/mary_winstead_320',
+'druuzil/melina_juergens_320',
+'druuzil/michael_fassbender_320',
+'druuzil/michael_fox_320',
+'druuzil/millie_bobby_brown_320',
+'druuzil/morgan_freeman_320',
+'druuzil/patrick_stewart_320',
+'druuzil/rebecca_ferguson_320',
+'druuzil/scarlett_johansson_320',
+'druuzil/seth_macfarlane_384',
+'druuzil/thomas_cruise_320',
+'druuzil/thomas_hanks_384',
+'edel/emma_roberts_224',
+'edel/ivanka_trump_224',
+'edel/lize_dzjabrailova_224',
+'edel/sidney_sweeney_224',
+'edel/winona_ryder_224',
+'iperov/alexandra_daddario_224',
+'iperov/alexei_navalny_224',
+'iperov/amber_heard_224',
+'iperov/dilraba_dilmurat_224',
+'iperov/elon_musk_224',
+'iperov/emilia_clarke_224',
+'iperov/emma_watson_224',
+'iperov/erin_moriarty_224',
+'iperov/jackie_chan_224',
+'iperov/james_carrey_224',
+'iperov/jason_statham_320',
+'iperov/keanu_reeves_320',
+'iperov/margot_robbie_224',
+'iperov/natalie_dormer_224',
+'iperov/nicolas_coppola_224',
+'iperov/robert_downey_224',
+'iperov/rowan_atkinson_224',
+'iperov/ryan_reynolds_224',
+'iperov/scarlett_johansson_224',
+'iperov/sylvester_stallone_224',
+'iperov/thomas_cruise_224',
+'iperov/thomas_holland_224',
+'iperov/vin_diesel_224',
+'iperov/vladimir_putin_224',
+'jen/angelica_trae_288',
+'jen/ella_freya_224',
+'jen/emma_myers_320',
+'jen/evie_pickerill_224',
+'jen/kang_hyewon_320',
+'jen/maddie_mead_224',
+'jen/nicole_turnbull_288',
+'mats/alica_schmidt_320',
+'mats/ashley_alexiss_224',
+'mats/billie_eilish_224',
+'mats/brie_larson_224',
+'mats/cara_delevingne_224',
+'mats/carolin_kebekus_224',
+'mats/chelsea_clinton_224',
+'mats/claire_boucher_224',
+'mats/corinna_kopf_224',
+'mats/florence_pugh_224',
+'mats/hillary_clinton_224',
+'mats/jenna_fischer_224',
+'mats/kim_jisoo_320',
+'mats/mica_suarez_320',
+'mats/shailene_woodley_224',
+'mats/shraddha_kapoor_320',
+'mats/yu_jimin_352',
+'rumateus/alison_brie_224',
+'rumateus/amber_heard_224',
+'rumateus/angelina_jolie_224',
+'rumateus/aubrey_plaza_224',
+'rumateus/bridget_regan_224',
+'rumateus/cobie_smulders_224',
+'rumateus/deborah_woll_224',
+'rumateus/dua_lipa_224',
+'rumateus/emma_stone_224',
+'rumateus/hailee_steinfeld_224',
+'rumateus/hilary_duff_224',
+'rumateus/jessica_alba_224',
+'rumateus/jessica_biel_224',
+'rumateus/john_cena_224',
+'rumateus/kim_kardashian_224',
+'rumateus/kristen_bell_224',
+'rumateus/lucy_liu_224',
+'rumateus/margot_robbie_224',
+'rumateus/megan_fox_224',
+'rumateus/meghan_markle_224',
+'rumateus/millie_bobby_brown_224',
+'rumateus/natalie_portman_224',
+'rumateus/nicki_minaj_224',
+'rumateus/olivia_wilde_224',
+'rumateus/shay_mitchell_224',
+'rumateus/sophie_turner_224',
+'rumateus/taylor_swift_224'
+]
ExpressionRestorerModel = Literal['live_portrait']
FaceDebuggerItem = Literal['bounding-box', 'face-landmark-5', 'face-landmark-5/68', 'face-landmark-68', 'face-landmark-68/5', 'face-mask', 'face-detector-score', 'face-landmarker-score', 'age', 'gender', 'race']
FaceEditorModel = Literal['live_portrait']
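Narrowing `DeepSwapperModel` from `str` to a `Literal` lets a type checker reject unknown model ids and lets callers recover the full choice list at runtime via `typing.get_args()`. A reduced sketch of that idiom (two stand-in entries instead of the full list above):

```python
from typing import Literal, get_args

# Reduced stand-in for the DeepSwapperModel alias; the real one lists every
# creator/name id from the diff above
DeepSwapperModel = Literal\
[
	'iperov/keanu_reeves_320',
	'mats/kim_jisoo_320'
]

# get_args() flattens the Literal into a plain tuple of values, handy for
# wiring into argparse choices or UI dropdowns
deep_swapper_models = list(get_args(DeepSwapperModel))
print(deep_swapper_models)
```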

View File

@@ -4,7 +4,7 @@ from argparse import ArgumentParser, HelpFormatter
import facefusion.choices
from facefusion import config, metadata, state_manager, wording
from facefusion.common_helper import create_float_metavar, create_int_metavar, get_last
-from facefusion.execution import get_available_execution_providers
+from facefusion.execution import get_execution_provider_set
from facefusion.filesystem import list_directory
from facefusion.jobs import job_store
from facefusion.processors.core import get_processors_modules
@@ -94,7 +94,7 @@ def create_output_pattern_program() -> ArgumentParser:
def create_face_detector_program() -> ArgumentParser:
program = ArgumentParser(add_help = False)
group_face_detector = program.add_argument_group('face detector')
-group_face_detector.add_argument('--face-detector-model', help = wording.get('help.face_detector_model'), default = config.get_str_value('face_detector.face_detector_model', 'yoloface'), choices = facefusion.choices.face_detector_models)
+group_face_detector.add_argument('--face-detector-model', help = wording.get('help.face_detector_model'), default = config.get_str_value('face_detector.face_detector_model', 'yoloface'), choices = list(facefusion.choices.face_detector_set.keys()))
known_args, _ = program.parse_known_args()
face_detector_size_choices = facefusion.choices.face_detector_set.get(known_args.face_detector_model)
group_face_detector.add_argument('--face-detector-size', help = wording.get('help.face_detector_size'), default = config.get_str_value('face_detector.face_detector_size', get_last(face_detector_size_choices)), choices = face_detector_size_choices)
@@ -132,13 +132,11 @@ def create_face_selector_program() -> ArgumentParser:
def create_face_masker_program() -> ArgumentParser:
program = ArgumentParser(add_help = False)
group_face_masker = program.add_argument_group('face masker')
-group_face_masker.add_argument('--face-occluder-model', help = wording.get('help.face_occluder_model'), default = config.get_str_value('face_detector.face_occluder_model', 'xseg_1'), choices = facefusion.choices.face_occluder_models)
-group_face_masker.add_argument('--face-parser-model', help = wording.get('help.face_parser_model'), default = config.get_str_value('face_detector.face_parser_model', 'bisenet_resnet_34'), choices = facefusion.choices.face_parser_models)
group_face_masker.add_argument('--face-mask-types', help = wording.get('help.face_mask_types').format(choices = ', '.join(facefusion.choices.face_mask_types)), default = config.get_str_list('face_masker.face_mask_types', 'box'), choices = facefusion.choices.face_mask_types, nargs = '+', metavar = 'FACE_MASK_TYPES')
group_face_masker.add_argument('--face-mask-blur', help = wording.get('help.face_mask_blur'), type = float, default = config.get_float_value('face_masker.face_mask_blur', '0.3'), choices = facefusion.choices.face_mask_blur_range, metavar = create_float_metavar(facefusion.choices.face_mask_blur_range))
group_face_masker.add_argument('--face-mask-padding', help = wording.get('help.face_mask_padding'), type = int, default = config.get_int_list('face_masker.face_mask_padding', '0 0 0 0'), nargs = '+')
group_face_masker.add_argument('--face-mask-regions', help = wording.get('help.face_mask_regions').format(choices = ', '.join(facefusion.choices.face_mask_regions)), default = config.get_str_list('face_masker.face_mask_regions', ' '.join(facefusion.choices.face_mask_regions)), choices = facefusion.choices.face_mask_regions, nargs = '+', metavar = 'FACE_MASK_REGIONS')
-job_store.register_step_keys([ 'face_occluder_model', 'face_parser_model', 'face_mask_types', 'face_mask_blur', 'face_mask_padding', 'face_mask_regions' ])
+job_store.register_step_keys([ 'face_mask_types', 'face_mask_blur', 'face_mask_padding', 'face_mask_regions' ])
return program
@@ -171,7 +169,7 @@ def create_output_creation_program() -> ArgumentParser:
def create_processors_program() -> ArgumentParser:
program = ArgumentParser(add_help = False)
-available_processors = [ file.get('name') for file in list_directory('facefusion/processors/modules') ]
+available_processors = list_directory('facefusion/processors/modules')
group_processors = program.add_argument_group('processors')
group_processors.add_argument('--processors', help = wording.get('help.processors').format(choices = ', '.join(available_processors)), default = config.get_str_list('processors.processors', 'face_swapper'), nargs = '+')
job_store.register_step_keys([ 'processors' ])
@@ -182,7 +180,7 @@ def create_processors_program() -> ArgumentParser:
def create_uis_program() -> ArgumentParser:
program = ArgumentParser(add_help = False)
-available_ui_layouts = [ file.get('name') for file in list_directory('facefusion/uis/layouts') ]
+available_ui_layouts = list_directory('facefusion/uis/layouts')
group_uis = program.add_argument_group('uis')
group_uis.add_argument('--open-browser', help = wording.get('help.open_browser'), action = 'store_true', default = config.get_bool_value('uis.open_browser'))
group_uis.add_argument('--ui-layouts', help = wording.get('help.ui_layouts').format(choices = ', '.join(available_ui_layouts)), default = config.get_str_list('uis.ui_layouts', 'default'), nargs = '+')
@@ -192,10 +190,9 @@ def create_uis_program() -> ArgumentParser:
def create_execution_program() -> ArgumentParser:
program = ArgumentParser(add_help = False)
-available_execution_providers = get_available_execution_providers()
group_execution = program.add_argument_group('execution')
group_execution.add_argument('--execution-device-id', help = wording.get('help.execution_device_id'), default = config.get_str_value('execution.execution_device_id', '0'))
-group_execution.add_argument('--execution-providers', help = wording.get('help.execution_providers').format(choices = ', '.join(available_execution_providers)), default = config.get_str_list('execution.execution_providers', 'cpu'), choices = available_execution_providers, nargs = '+', metavar = 'EXECUTION_PROVIDERS')
+group_execution.add_argument('--execution-providers', help = wording.get('help.execution_providers').format(choices = ', '.join(list(get_execution_provider_set().keys()))), default = config.get_str_list('execution.execution_providers', 'cpu'), choices = list(get_execution_provider_set().keys()), nargs = '+', metavar = 'EXECUTION_PROVIDERS')
group_execution.add_argument('--execution-thread-count', help = wording.get('help.execution_thread_count'), type = int, default = config.get_int_value('execution.execution_thread_count', '4'), choices = facefusion.choices.execution_thread_count_range, metavar = create_int_metavar(facefusion.choices.execution_thread_count_range))
group_execution.add_argument('--execution-queue-count', help = wording.get('help.execution_queue_count'), type = int, default = config.get_int_value('execution.execution_queue_count', '1'), choices = facefusion.choices.execution_queue_count_range, metavar = create_int_metavar(facefusion.choices.execution_queue_count_range))
job_store.register_job_keys([ 'execution_device_id', 'execution_providers', 'execution_thread_count', 'execution_queue_count' ])
@ -204,9 +201,8 @@ def create_execution_program() -> ArgumentParser:
def create_download_providers_program() -> ArgumentParser: def create_download_providers_program() -> ArgumentParser:
program = ArgumentParser(add_help = False) program = ArgumentParser(add_help = False)
download_providers = list(facefusion.choices.download_provider_set.keys())
group_download = program.add_argument_group('download') group_download = program.add_argument_group('download')
group_download.add_argument('--download-providers', help = wording.get('help.download_providers').format(choices = ', '.join(download_providers)), default = config.get_str_list('download.download_providers', ' '.join(facefusion.choices.download_providers)), choices = download_providers, nargs = '+', metavar = 'DOWNLOAD_PROVIDERS') group_download.add_argument('--download-providers', help = wording.get('help.download_providers').format(choices = ', '.join(list(facefusion.choices.download_provider_set.keys()))), default = config.get_str_list('download.download_providers', 'github'), choices = list(facefusion.choices.download_provider_set.keys()), nargs = '+', metavar = 'DOWNLOAD_PROVIDERS')
job_store.register_job_keys([ 'download_providers' ]) job_store.register_job_keys([ 'download_providers' ])
return program return program
@ -214,7 +210,7 @@ def create_download_providers_program() -> ArgumentParser:
def create_download_scope_program() -> ArgumentParser: def create_download_scope_program() -> ArgumentParser:
program = ArgumentParser(add_help = False) program = ArgumentParser(add_help = False)
group_download = program.add_argument_group('download') group_download = program.add_argument_group('download')
group_download.add_argument('--download-scope', help = wording.get('help.download_scope'), default = config.get_str_value('download.download_scope', 'lite'), choices = facefusion.choices.download_scopes) group_download.add_argument('--download-scope', help = wording.get('help.download_scope'), default = config.get_str_value('download.download_scope', 'lite'), choices = list(facefusion.choices.download_scopes))
job_store.register_job_keys([ 'download_scope' ]) job_store.register_job_keys([ 'download_scope' ])
return program return program
@ -230,9 +226,8 @@ def create_memory_program() -> ArgumentParser:
def create_misc_program() -> ArgumentParser: def create_misc_program() -> ArgumentParser:
program = ArgumentParser(add_help = False) program = ArgumentParser(add_help = False)
log_level_keys = list(facefusion.choices.log_level_set.keys())
group_misc = program.add_argument_group('misc') group_misc = program.add_argument_group('misc')
group_misc.add_argument('--log-level', help = wording.get('help.log_level'), default = config.get_str_value('misc.log_level', 'info'), choices = log_level_keys) group_misc.add_argument('--log-level', help = wording.get('help.log_level'), default = config.get_str_value('misc.log_level', 'info'), choices = list(facefusion.choices.log_level_set.keys()))
job_store.register_job_keys([ 'log_level' ]) job_store.register_job_keys([ 'log_level' ])
return program return program

View File

@@ -0,0 +1,242 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Demo</title>
<style>
.preview, meter {
width: 100%;
}
meter { border-radius: 0}
img, textarea {
width: 100%;
}
input[type="range"] {
width: 100%;
margin-top: 1em;
}
</style>
</head>
<body>
<div style="display:flex">
<div class="preview">
<h1>Preview</h1>
<div class="image-container">
<img id="image" alt="Frame Image">
</div>
<input type="range" id="slider" value="0">
<p>Frame: <span id="frameValue">0</span></p>
<button id="playBtn">Play</button>
<button id="stopBtn" disabled>Stop</button>
<input type="checkbox" id="useWebSocket" /> Use WebSocket for Preview
</div>
<div>
<h1>Debug</h1>
<p>Video Memory <meter id="video_memory" min="0" max="100" value="0"></meter></p>
<p>GPU Utilization <meter id="gpu_utilization" min="0" max="100" value="0"></meter></p>
<textarea id="debug" rows="1" cols="80" readonly></textarea>
<textarea id="devices" rows="4" cols="80" readonly></textarea>
<textarea id="state" rows="30" cols="80" readonly></textarea>
<textarea id="log" rows="10" cols="80" readonly></textarea>
<textarea id="fps" rows="2" cols="80" readonly></textarea>
</div>
</div>
<script>
function createWebSocketConnection(url, debug, container) {
const socket = new WebSocket(url);
socket.onopen = () => {
debug.value = `WebSocket connection established for URL: ${url}`;
};
socket.onmessage = event => {
debug.value = `WebSocket Event: ${event.type}`;
container.value = event.data;
};
socket.onerror = error => {
debug.value = `WebSocket Error: ${error}`;
};
socket.onclose = () => {
debug.value = `WebSocket connection closed for URL: ${url} -> Reloading Page`;
setTimeout(() => location.reload(), 1000);
};
return socket;
}
devicesSocket = createWebSocketConnection('ws://127.0.0.1:8000/execution/devices', debug, devices);
createWebSocketConnection('ws://127.0.0.1:8000/state', debug, state);
devicesSocket.addEventListener('message', event => {
const data = JSON.parse(event.data)[0]
const freeMemory = data.video_memory.free.value;
const totalMemory = data.video_memory.total.value;
const usedMemory = totalMemory - freeMemory;
const usedMemoryPercentage = (usedMemory / totalMemory) * 100;
video_memory.value = usedMemoryPercentage;
gpu_utilization.value = data.utilization.gpu.value
})
</script>
<script>
const slider = document.getElementById('slider');
const image = document.getElementById('image');
const frameValue = document.getElementById('frameValue');
const playBtn = document.getElementById('playBtn');
const stopBtn = document.getElementById('stopBtn');
const logTextarea = document.getElementById('log');
const useWebSocketCheckbox = document.getElementById('useWebSocket');
let totalFrames = 0;
let currentFrame = 0;
let totalFps = 0;
let requestCount = 0;
let isPlaying = false;
let socket;
// Fetch the total frame count and set up the slider
async function fetchSliderTotal() {
try {
const start = performance.now();
const response = await fetch('http://127.0.0.1:8000/ui/preview_slider');
const end = performance.now();
logRequest('GET', 'http://127.0.0.1:8000/ui/preview_slider', start, end);
if (response.ok) {
const data = await response.json();
totalFrames = data.video_frame_total;
slider.max = totalFrames;
slider.value = 0;
frameValue.textContent = 0;
image.src = `http://127.0.0.1:8000/preview?frame_number=0`;
} else {
console.error('Failed to fetch total frame count');
}
} catch (error) {
console.error('Error fetching total frame count:', error);
}
}
// Function to log request details to the textarea
function logRequest(method, url, startTime, endTime) {
const duration = (endTime - startTime).toFixed(2);
const logMessage = `${method} ${url} | Duration: ${duration}ms\n`;
// Append to the log textarea
logTextarea.value += logMessage;
logTextarea.scrollTop = logTextarea.scrollHeight; // Auto scroll to the bottom
}
function logFps(startTime, endTime) {
const duration = (endTime - startTime).toFixed(2);
const durationInSeconds = duration / 1000; // Convert ms to seconds
const fps = (1 / durationInSeconds).toFixed(2); // FPS for this request
// Update total FPS and request count
totalFps += parseFloat(fps);
requestCount++;
// Calculate average FPS
const averageFps = (totalFps / requestCount).toFixed(2);
// Update the textarea with id 'fps' to show the average FPS
const fpsTextarea = document.getElementById('fps');
fpsTextarea.value = `Average FPS: ${averageFps}\n`;
// Optionally, you can append the current FPS to the textarea as well:
fpsTextarea.value += `Current FPS: ${fps}\n`;
}
// Function to update the image based on the slider's value or WebSocket message
function updateImage() {
const frameNumber = slider.value;
// If WebSocket is enabled, use WebSocket to fetch the image
if (useWebSocketCheckbox.checked) {
image.onload = null
if (!socket) {
socket = new WebSocket('ws://127.0.0.1:8000/preview');
}
const start = performance.now();
socket.send(JSON.stringify({ frame_number: frameNumber }));
socket.onmessage = function (event) {
const end = performance.now();
logRequest('WEBSOCKET', 'ws://127.0.0.1:8000/preview', start, end);
logFps(start, end)
// Create a Blob URL from the WebSocket message (assumed to be a Blob)
const imageUrl = URL.createObjectURL(event.data);
// Set the image source to the Blob URL
image.src = imageUrl;
frameValue.textContent = frameNumber;
// Continue if playing
if (isPlaying && currentFrame < totalFrames) {
currentFrame++;
slider.value = currentFrame;
updateImage(); // Continue to next frame
}
};
} else {
socket = null
// Use default fetch for the image
const start = performance.now();
image.src = `http://127.0.0.1:8000/preview?frame_number=${frameNumber}`;
image.onload = function () {
const end = performance.now();
logRequest('GET', image.src, start, end);
logFps(start, end)
frameValue.textContent = frameNumber;
// Continue if playing
if (isPlaying && currentFrame < totalFrames) {
currentFrame++;
slider.value = currentFrame;
updateImage(); // Continue to next frame
}
};
}
}
// Function to start the play action (without setInterval, only on image load)
function startPlay() {
playBtn.disabled = true;
stopBtn.disabled = false;
isPlaying = true;
currentFrame = parseInt(slider.value, 10);
// Start loading the first image
updateImage();
}
// Function to stop the play action
function stopPlay() {
isPlaying = false;
playBtn.disabled = false;
stopBtn.disabled = true;
if (socket) {
socket.close(); // Close WebSocket when stopping
}
}
// Event listeners for Play/Stop buttons
playBtn.addEventListener('click', startPlay);
stopBtn.addEventListener('click', stopPlay);
// Slider manual update
slider.addEventListener('change', function () {
currentFrame = slider.value;
updateImage();
});
// Fetch the total number of frames when the page loads
window.onload = fetchSliderTotal;
</script>
</body>
</html>
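The page's `logFps()` keeps a running average of per-request FPS derived from each round-trip duration. The same arithmetic as a standalone Python sketch (without the `toFixed` rounding the JavaScript applies for display):

```python
def update_average_fps(total_fps : float, request_count : int, duration_ms : float) -> tuple:
	# one request took duration_ms, so its momentary FPS is 1000 / duration_ms
	current_fps = 1000.0 / duration_ms
	total_fps += current_fps
	request_count += 1
	# running average across all requests so far
	return total_fps, request_count, total_fps / request_count
```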

View File

@@ -101,11 +101,8 @@ FaceLandmarkerModel = Literal['many', '2dfan4', 'peppa_wutz']
 FaceDetectorSet = Dict[FaceDetectorModel, List[str]]
 FaceSelectorMode = Literal['many', 'one', 'reference']
 FaceSelectorOrder = Literal['left-right', 'right-left', 'top-bottom', 'bottom-top', 'small-large', 'large-small', 'best-worst', 'worst-best']
-FaceOccluderModel = Literal['xseg_1', 'xseg_2']
-FaceParserModel = Literal['bisenet_resnet_18', 'bisenet_resnet_34']
 FaceMaskType = Literal['box', 'occlusion', 'region']
 FaceMaskRegion = Literal['skin', 'left-eyebrow', 'right-eyebrow', 'left-eye', 'right-eye', 'glasses', 'nose', 'mouth', 'upper-lip', 'lower-lip']
-FaceMaskRegionSet = Dict[FaceMaskRegion, int]
 TempFrameFormat = Literal['bmp', 'jpg', 'png']
 OutputAudioEncoder = Literal['aac', 'libmp3lame', 'libopus', 'libvorbis']
 OutputVideoEncoder = Literal['libx264', 'libx265', 'libvpx-vp9', 'h264_nvenc', 'hevc_nvenc', 'h264_amf', 'hevc_amf','h264_qsv', 'hevc_qsv', 'h264_videotoolbox', 'hevc_videotoolbox']
@@ -115,9 +112,9 @@ ModelOptions = Dict[str, Any]
 ModelSet = Dict[str, ModelOptions]
 ModelInitializer = NDArray[Any]
 
-ExecutionProvider = Literal['cpu', 'coreml', 'cuda', 'directml', 'openvino', 'rocm', 'tensorrt']
+ExecutionProviderKey = Literal['cpu', 'coreml', 'cuda', 'directml', 'openvino', 'rocm', 'tensorrt']
 ExecutionProviderValue = Literal['CPUExecutionProvider', 'CoreMLExecutionProvider', 'CUDAExecutionProvider', 'DmlExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'TensorrtExecutionProvider']
-ExecutionProviderSet = Dict[ExecutionProvider, ExecutionProviderValue]
+ExecutionProviderSet = Dict[ExecutionProviderKey, ExecutionProviderValue]
 ValueAndUnit = TypedDict('ValueAndUnit',
 {
 	'value' : int,
@@ -158,13 +155,8 @@ ExecutionDevice = TypedDict('ExecutionDevice',
 	'utilization' : ExecutionDeviceUtilization
 })
 
-DownloadProvider = Literal['github', 'huggingface']
-DownloadProviderValue = TypedDict('DownloadProviderValue',
-{
-	'url' : str,
-	'path' : str
-})
-DownloadProviderSet = Dict[DownloadProvider, DownloadProviderValue]
+DownloadProviderKey = Literal['github', 'huggingface']
+DownloadProviderSet = Dict[DownloadProviderKey, str]
 DownloadScope = Literal['lite', 'full']
 Download = TypedDict('Download',
 {
@@ -175,13 +167,6 @@ DownloadSet = Dict[str, Download]
 
 VideoMemoryStrategy = Literal['strict', 'moderate', 'tolerant']
 
-File = TypedDict('File',
-{
-	'name' : str,
-	'extension' : str,
-	'path': str
-})
-
 AppContext = Literal['cli', 'ui']
 
 InferencePool = Dict[str, InferenceSession]
@@ -239,8 +224,6 @@ StateKey = Literal\
 	'reference_face_position',
 	'reference_face_distance',
 	'reference_frame_number',
-	'face_occluder_model',
-	'face_parser_model',
 	'face_mask_types',
 	'face_mask_blur',
 	'face_mask_padding',
@@ -302,8 +285,6 @@ State = TypedDict('State',
 	'reference_face_position' : int,
 	'reference_face_distance' : float,
 	'reference_frame_number' : int,
-	'face_occluder_model' : FaceOccluderModel,
-	'face_parser_model' : FaceParserModel,
 	'face_mask_types' : List[FaceMaskType],
 	'face_mask_blur' : float,
 	'face_mask_padding' : Padding,
@@ -326,10 +307,10 @@ State = TypedDict('State',
 	'ui_layouts' : List[str],
 	'ui_workflow' : UiWorkflow,
 	'execution_device_id' : str,
-	'execution_providers' : List[ExecutionProvider],
+	'execution_providers' : List[ExecutionProviderKey],
 	'execution_thread_count' : int,
 	'execution_queue_count' : int,
-	'download_providers' : List[DownloadProvider],
+	'download_providers' : List[DownloadProviderKey],
 	'download_scope' : DownloadScope,
 	'video_memory_strategy' : VideoMemoryStrategy,
 	'system_memory_limit' : int,
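The `DownloadProviderSet` change above collapses the per-provider TypedDict (`url`/`path` fields) into a plain URL string keyed by `DownloadProviderKey`. A self-contained sketch of the new shape, with hypothetical example values:

```python
from typing import Dict, Literal

DownloadProviderKey = Literal['github', 'huggingface']
# value type narrows from a TypedDict with 'url'/'path' fields to a plain URL string
DownloadProviderSet = Dict[DownloadProviderKey, str]

# hypothetical example values, only to illustrate the key -> url shape
download_provider_set : DownloadProviderSet =\
{
	'github' : 'https://github.com',
	'huggingface' : 'https://huggingface.co'
}
```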

View File

@@ -65,56 +65,35 @@
 	min-height: unset;
 }
 
-:root:root:root:root .tab-wrapper
+:root:root:root:root .tabs button:hover
 {
-	padding: 0 0.625rem;
+	background: unset;
 }
 
 :root:root:root:root .tab-container
 {
-	gap: 0.5em;
+	height: 2.5rem;
 }
 
-:root:root:root:root .tab-container button
+:root:root:root:root .tabitem
 {
-	background: unset;
-	border-bottom: 0.125rem solid;
+	padding: 0.75rem 0 0 0
 }
 
-:root:root:root:root .tab-container button.selected
+:root:root:root:root .tab-container:after,
+:root:root:root:root .tabs button:after
 {
-	color: var(--primary-500)
+	border-width: 0.125rem;
 }
 
-:root:root:root:root .toast-body
+:root:root:root:root .tab-container:after
 {
-	background: white;
-	color: var(--primary-500);
-	border: unset;
-	border-radius: unset;
-}
-
-:root:root:root:root .dark .toast-body
-{
-	background: var(--neutral-900);
-	color: var(--primary-600);
-}
-
-:root:root:root:root .toast-icon,
-:root:root:root:root .toast-title,
-:root:root:root:root .toast-text,
-:root:root:root:root .toast-close
-{
-	color: unset;
-}
-
-:root:root:root:root .toast-body .timer
-{
-	background: currentColor;
+	border-color: var(--block-background-fill)
 }
 
 :root:root:root:root .slider_input_container > span,
 :root:root:root:root .feather-upload,
+:root:root:root:root .toast-wrap,
 :root:root:root:root footer
 {
 	display: none;

View File

@@ -2,11 +2,11 @@ from typing import List, Optional
 
 import gradio
 
-import facefusion.choices
 from facefusion import content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, state_manager, voice_extractor, wording
+from facefusion.choices import download_provider_set
 from facefusion.filesystem import list_directory
 from facefusion.processors.core import get_processors_modules
-from facefusion.typing import DownloadProvider
+from facefusion.typing import DownloadProviderKey
 
 DOWNLOAD_PROVIDERS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
@@ -16,7 +16,7 @@ def render() -> None:
 
 	DOWNLOAD_PROVIDERS_CHECKBOX_GROUP = gradio.CheckboxGroup(
 		label = wording.get('uis.download_providers_checkbox_group'),
-		choices = facefusion.choices.download_providers,
+		choices = list(download_provider_set.keys()),
 		value = state_manager.get_item('download_providers')
 	)
@@ -25,7 +25,7 @@ def listen() -> None:
 	DOWNLOAD_PROVIDERS_CHECKBOX_GROUP.change(update_download_providers, inputs = DOWNLOAD_PROVIDERS_CHECKBOX_GROUP, outputs = DOWNLOAD_PROVIDERS_CHECKBOX_GROUP)
 
 
-def update_download_providers(download_providers : List[DownloadProvider]) -> gradio.CheckboxGroup:
+def update_download_providers(download_providers : List[DownloadProviderKey]) -> gradio.CheckboxGroup:
 	common_modules =\
 	[
 		content_analyser,
@@ -36,13 +36,13 @@ def update_download_providers(download_providers : List[DownloadProvider]) -> gr
 		face_masker,
 		voice_extractor
 	]
-	available_processors = [ file.get('name') for file in list_directory('facefusion/processors/modules') ]
+	available_processors = list_directory('facefusion/processors/modules')
 	processor_modules = get_processors_modules(available_processors)
 	for module in common_modules + processor_modules:
 		if hasattr(module, 'create_static_model_set'):
 			module.create_static_model_set.cache_clear()
-	download_providers = download_providers or facefusion.choices.download_providers
+	download_providers = download_providers or list(download_provider_set.keys())
 	state_manager.set_item('download_providers', download_providers)
 	return gradio.CheckboxGroup(value = state_manager.get_item('download_providers'))
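The `hasattr`/`cache_clear` loop above assumes each module memoises its static model set with `functools.lru_cache`, so switching download providers must invalidate those caches before URLs are re-resolved. A minimal sketch of that invalidation pattern, with a hypothetical factory standing in for the real module functions:

```python
from functools import lru_cache
from types import SimpleNamespace

@lru_cache(maxsize = None)
def create_static_model_set(download_scope : str) -> dict:
	# stand-in for a module-level factory that resolves model metadata per scope
	return { 'download_scope' : download_scope }

# modules are duck-typed: only those exposing the factory get invalidated
module_with_cache = SimpleNamespace(create_static_model_set = create_static_model_set)
module_without_cache = SimpleNamespace()

for module in [ module_with_cache, module_without_cache ]:
	if hasattr(module, 'create_static_model_set'):
		module.create_static_model_set.cache_clear()
```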

View File

@@ -3,10 +3,10 @@ from typing import List, Optional
 import gradio
 
 from facefusion import content_analyser, face_classifier, face_detector, face_landmarker, face_masker, face_recognizer, state_manager, voice_extractor, wording
-from facefusion.execution import get_available_execution_providers
+from facefusion.execution import get_execution_provider_set
 from facefusion.filesystem import list_directory
 from facefusion.processors.core import get_processors_modules
-from facefusion.typing import ExecutionProvider
+from facefusion.typing import ExecutionProviderKey
 
 EXECUTION_PROVIDERS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
@@ -16,7 +16,7 @@ def render() -> None:
 
 	EXECUTION_PROVIDERS_CHECKBOX_GROUP = gradio.CheckboxGroup(
 		label = wording.get('uis.execution_providers_checkbox_group'),
-		choices = get_available_execution_providers(),
+		choices = list(get_execution_provider_set().keys()),
 		value = state_manager.get_item('execution_providers')
 	)
@@ -25,7 +25,7 @@ def listen() -> None:
 	EXECUTION_PROVIDERS_CHECKBOX_GROUP.change(update_execution_providers, inputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP, outputs = EXECUTION_PROVIDERS_CHECKBOX_GROUP)
 
 
-def update_execution_providers(execution_providers : List[ExecutionProvider]) -> gradio.CheckboxGroup:
+def update_execution_providers(execution_providers : List[ExecutionProviderKey]) -> gradio.CheckboxGroup:
 	common_modules =\
 	[
 		content_analyser,
@@ -36,13 +36,13 @@ def update_execution_providers(execution_providers : List[ExecutionProvider]) ->
 		face_recognizer,
 		voice_extractor
 	]
-	available_processors = [ file.get('name') for file in list_directory('facefusion/processors/modules') ]
+	available_processors = list_directory('facefusion/processors/modules')
 	processor_modules = get_processors_modules(available_processors)
 	for module in common_modules + processor_modules:
 		if hasattr(module, 'clear_inference_pool'):
 			module.clear_inference_pool()
-	execution_providers = execution_providers or get_available_execution_providers()
+	execution_providers = execution_providers or list(get_execution_provider_set())
 	state_manager.set_item('execution_providers', execution_providers)
 	return gradio.CheckboxGroup(value = state_manager.get_item('execution_providers'))

View File

@@ -31,7 +31,7 @@ def render() -> None:
 	with gradio.Row():
 		FACE_DETECTOR_MODEL_DROPDOWN = gradio.Dropdown(
 			label = wording.get('uis.face_detector_model_dropdown'),
-			choices = facefusion.choices.face_detector_models,
+			choices = list(facefusion.choices.face_detector_set.keys()),
 			value = state_manager.get_item('face_detector_model')
 		)
 		FACE_DETECTOR_SIZE_DROPDOWN = gradio.Dropdown(**face_detector_size_dropdown_options)

View File

@@ -3,13 +3,11 @@ from typing import List, Optional, Tuple
 import gradio
 
 import facefusion.choices
-from facefusion import face_masker, state_manager, wording
+from facefusion import state_manager, wording
 from facefusion.common_helper import calc_float_step, calc_int_step
-from facefusion.typing import FaceMaskRegion, FaceMaskType, FaceOccluderModel, FaceParserModel
+from facefusion.typing import FaceMaskRegion, FaceMaskType
 from facefusion.uis.core import register_ui_component
 
-FACE_OCCLUDER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
-FACE_PARSER_MODEL_DROPDOWN : Optional[gradio.Dropdown] = None
 FACE_MASK_TYPES_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
 FACE_MASK_REGIONS_CHECKBOX_GROUP : Optional[gradio.CheckboxGroup] = None
 FACE_MASK_BLUR_SLIDER : Optional[gradio.Slider] = None
@@ -20,8 +18,6 @@ FACE_MASK_PADDING_LEFT_SLIDER : Optional[gradio.Slider] = None
 
 
 def render() -> None:
-	global FACE_OCCLUDER_MODEL_DROPDOWN
-	global FACE_PARSER_MODEL_DROPDOWN
 	global FACE_MASK_TYPES_CHECKBOX_GROUP
 	global FACE_MASK_REGIONS_CHECKBOX_GROUP
 	global FACE_MASK_BLUR_SLIDER
@@ -32,17 +28,6 @@ def render() -> None:
 	has_box_mask = 'box' in state_manager.get_item('face_mask_types')
 	has_region_mask = 'region' in state_manager.get_item('face_mask_types')
 
-	with gradio.Row():
-		FACE_OCCLUDER_MODEL_DROPDOWN = gradio.Dropdown(
-			label = wording.get('uis.face_occluder_model_dropdown'),
-			choices = facefusion.choices.face_occluder_models,
-			value = state_manager.get_item('face_occluder_model')
-		)
-		FACE_PARSER_MODEL_DROPDOWN = gradio.Dropdown(
-			label = wording.get('uis.face_parser_model_dropdown'),
-			choices = facefusion.choices.face_parser_models,
-			value = state_manager.get_item('face_parser_model')
-		)
 	FACE_MASK_TYPES_CHECKBOX_GROUP = gradio.CheckboxGroup(
 		label = wording.get('uis.face_mask_types_checkbox_group'),
 		choices = facefusion.choices.face_mask_types,
@@ -97,8 +82,6 @@ def render() -> None:
 			value = state_manager.get_item('face_mask_padding')[3],
 			visible = has_box_mask
 		)
-	register_ui_component('face_occluder_model_dropdown', FACE_OCCLUDER_MODEL_DROPDOWN)
-	register_ui_component('face_parser_model_dropdown', FACE_PARSER_MODEL_DROPDOWN)
 	register_ui_component('face_mask_types_checkbox_group', FACE_MASK_TYPES_CHECKBOX_GROUP)
 	register_ui_component('face_mask_regions_checkbox_group', FACE_MASK_REGIONS_CHECKBOX_GROUP)
 	register_ui_component('face_mask_blur_slider', FACE_MASK_BLUR_SLIDER)
@@ -109,8 +92,6 @@ def render() -> None:
 
 
 def listen() -> None:
-	FACE_OCCLUDER_MODEL_DROPDOWN.change(update_face_occluder_model, inputs = FACE_OCCLUDER_MODEL_DROPDOWN)
-	FACE_PARSER_MODEL_DROPDOWN.change(update_face_parser_model, inputs = FACE_PARSER_MODEL_DROPDOWN)
 	FACE_MASK_TYPES_CHECKBOX_GROUP.change(update_face_mask_types, inputs = FACE_MASK_TYPES_CHECKBOX_GROUP, outputs = [ FACE_MASK_TYPES_CHECKBOX_GROUP, FACE_MASK_REGIONS_CHECKBOX_GROUP, FACE_MASK_BLUR_SLIDER, FACE_MASK_PADDING_TOP_SLIDER, FACE_MASK_PADDING_RIGHT_SLIDER, FACE_MASK_PADDING_BOTTOM_SLIDER, FACE_MASK_PADDING_LEFT_SLIDER ])
 	FACE_MASK_REGIONS_CHECKBOX_GROUP.change(update_face_mask_regions, inputs = FACE_MASK_REGIONS_CHECKBOX_GROUP, outputs = FACE_MASK_REGIONS_CHECKBOX_GROUP)
 	FACE_MASK_BLUR_SLIDER.release(update_face_mask_blur, inputs = FACE_MASK_BLUR_SLIDER)
@@ -119,24 +100,6 @@ def listen() -> None:
 		face_mask_padding_slider.release(update_face_mask_padding, inputs = face_mask_padding_sliders)
 
 
-def update_face_occluder_model(face_occluder_model : FaceOccluderModel) -> gradio.Dropdown:
-	face_masker.clear_inference_pool()
-	state_manager.set_item('face_occluder_model', face_occluder_model)
-	if face_masker.pre_check():
-		return gradio.Dropdown(value = state_manager.get_item('face_occluder_model'))
-	return gradio.Dropdown()
-
-
-def update_face_parser_model(face_parser_model : FaceParserModel) -> gradio.Dropdown:
-	face_masker.clear_inference_pool()
-	state_manager.set_item('face_parser_model', face_parser_model)
-	if face_masker.pre_check():
-		return gradio.Dropdown(value = state_manager.get_item('face_parser_model'))
-	return gradio.Dropdown()
-
-
 def update_face_mask_types(face_mask_types : List[FaceMaskType]) -> Tuple[gradio.CheckboxGroup, gradio.CheckboxGroup, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider, gradio.Slider]:
 	face_mask_types = face_mask_types or facefusion.choices.face_mask_types
 	state_manager.set_item('face_mask_types', face_mask_types)


@@ -20,7 +20,7 @@ def render() -> None:
 	has_face_swapper = 'face_swapper' in state_manager.get_item('processors')
 	FACE_SWAPPER_MODEL_DROPDOWN = gradio.Dropdown(
 		label = wording.get('uis.face_swapper_model_dropdown'),
-		choices = processors_choices.face_swapper_models,
+		choices = list(processors_choices.face_swapper_set.keys()),
 		value = state_manager.get_item('face_swapper_model'),
 		visible = has_face_swapper
 	)


@@ -162,9 +162,7 @@ def listen() -> None:
 		'face_detector_model_dropdown',
 		'face_detector_size_dropdown',
 		'face_detector_angles_checkbox_group',
-		'face_landmarker_model_dropdown',
-		'face_occluder_model_dropdown',
-		'face_parser_model_dropdown'
+		'face_landmarker_model_dropdown'
 	]):
 		ui_component.change(clear_and_update_preview_image, inputs = PREVIEW_FRAME_SLIDER, outputs = PREVIEW_IMAGE)


@@ -39,5 +39,5 @@ def update_processors(processors : List[str]) -> gradio.CheckboxGroup:
 def sort_processors(processors : List[str]) -> List[str]:
-	available_processors = [ file.get('name') for file in list_directory('facefusion/processors/modules') ]
+	available_processors = list_directory('facefusion/processors/modules')
 	return sorted(available_processors, key = lambda processor : processors.index(processor) if processor in processors else len(processors))
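The sort key in `sort_processors` keeps the user-selected processors first, in their selection order, and pushes everything else to the end. A minimal standalone sketch of that key function (plain lists, no facefusion imports, illustrative processor names):

```python
def sort_processors(available_processors, selected_processors):
	# selected processors sort by their position in the selection,
	# unselected ones share the sentinel key len(selected_processors) and keep their original order (sorted is stable)
	return sorted(available_processors, key = lambda processor : selected_processors.index(processor) if processor in selected_processors else len(selected_processors))

print(sort_processors([ 'age_modifier', 'face_swapper', 'frame_enhancer' ], [ 'frame_enhancer', 'face_swapper' ]))
```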


@@ -7,8 +7,8 @@ from typing import Optional
 import gradio
 from tqdm import tqdm
-import facefusion.choices
 from facefusion import logger, state_manager, wording
+from facefusion.choices import log_level_set
 from facefusion.typing import LogLevel
 LOG_LEVEL_DROPDOWN : Optional[gradio.Dropdown] = None
@@ -24,7 +24,7 @@ def render() -> None:
 	LOG_LEVEL_DROPDOWN = gradio.Dropdown(
 		label = wording.get('uis.log_level_dropdown'),
-		choices = facefusion.choices.log_levels,
+		choices = list(log_level_set.keys()),
 		value = state_manager.get_item('log_level')
 	)
 	TERMINAL_TEXTBOX = gradio.Textbox(


@@ -2,7 +2,7 @@ import os
 import subprocess
 from collections import deque
 from concurrent.futures import ThreadPoolExecutor
-from typing import Deque, Generator, List, Optional
+from typing import Deque, Generator, Optional
 import cv2
 import gradio
@@ -10,7 +10,7 @@ from tqdm import tqdm
 from facefusion import logger, state_manager, wording
 from facefusion.audio import create_empty_audio_frame
-from facefusion.common_helper import get_first, is_windows
+from facefusion.common_helper import is_windows
 from facefusion.content_analyser import analyse_stream
 from facefusion.face_analyser import get_average_face, get_many_faces
 from facefusion.ffmpeg import open_ffmpeg
@@ -27,17 +27,14 @@ WEBCAM_START_BUTTON : Optional[gradio.Button] = None
 WEBCAM_STOP_BUTTON : Optional[gradio.Button] = None
-def get_webcam_capture(webcam_device_id : int) -> Optional[cv2.VideoCapture]:
+def get_webcam_capture() -> Optional[cv2.VideoCapture]:
 	global WEBCAM_CAPTURE
 	if WEBCAM_CAPTURE is None:
-		cv2.setLogLevel(0)
 		if is_windows():
-			webcam_capture = cv2.VideoCapture(webcam_device_id, cv2.CAP_DSHOW)
+			webcam_capture = cv2.VideoCapture(0, cv2.CAP_DSHOW)
 		else:
-			webcam_capture = cv2.VideoCapture(webcam_device_id)
-		cv2.setLogLevel(3)
+			webcam_capture = cv2.VideoCapture(0)
 		if webcam_capture and webcam_capture.isOpened():
 			WEBCAM_CAPTURE = webcam_capture
 	return WEBCAM_CAPTURE
@@ -71,35 +68,27 @@ def render() -> None:
 def listen() -> None:
-	webcam_device_id_dropdown = get_ui_component('webcam_device_id_dropdown')
 	webcam_mode_radio = get_ui_component('webcam_mode_radio')
 	webcam_resolution_dropdown = get_ui_component('webcam_resolution_dropdown')
 	webcam_fps_slider = get_ui_component('webcam_fps_slider')
-	source_image = get_ui_component('source_image')
-	if webcam_device_id_dropdown and webcam_mode_radio and webcam_resolution_dropdown and webcam_fps_slider:
-		start_event = WEBCAM_START_BUTTON.click(start, inputs = [ webcam_device_id_dropdown, webcam_mode_radio, webcam_resolution_dropdown, webcam_fps_slider ], outputs = WEBCAM_IMAGE)
-		WEBCAM_STOP_BUTTON.click(stop, cancels = start_event, outputs = WEBCAM_IMAGE)
-	if source_image:
-		source_image.change(stop, cancels = start_event, outputs = WEBCAM_IMAGE)
+	if webcam_mode_radio and webcam_resolution_dropdown and webcam_fps_slider:
+		start_event = WEBCAM_START_BUTTON.click(start, inputs = [ webcam_mode_radio, webcam_resolution_dropdown, webcam_fps_slider ], outputs = WEBCAM_IMAGE)
+		WEBCAM_STOP_BUTTON.click(stop, cancels = start_event)
-def start(webcam_device_id : int, webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
+def start(webcam_mode : WebcamMode, webcam_resolution : str, webcam_fps : Fps) -> Generator[VisionFrame, None, None]:
 	state_manager.set_item('face_selector_mode', 'one')
 	source_image_paths = filter_image_paths(state_manager.get_item('source_paths'))
 	source_frames = read_static_images(source_image_paths)
 	source_faces = get_many_faces(source_frames)
 	source_face = get_average_face(source_faces)
 	stream = None
-	webcam_capture = None
 	if webcam_mode in [ 'udp', 'v4l2' ]:
 		stream = open_stream(webcam_mode, webcam_resolution, webcam_fps) #type:ignore[arg-type]
 	webcam_width, webcam_height = unpack_resolution(webcam_resolution)
-	if isinstance(webcam_device_id, int):
-		webcam_capture = get_webcam_capture(webcam_device_id)
+	webcam_capture = get_webcam_capture()
 	if webcam_capture and webcam_capture.isOpened():
 		webcam_capture.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG')) #type:ignore[attr-defined]
@@ -170,22 +159,9 @@ def open_stream(stream_mode : StreamMode, stream_resolution : str, stream_fps :
 		commands.extend([ '-b:v', '2000k', '-f', 'mpegts', 'udp://localhost:27000?pkt_size=1316' ])
 	if stream_mode == 'v4l2':
 		try:
-			device_name = get_first(os.listdir('/sys/devices/virtual/video4linux'))
+			device_name = os.listdir('/sys/devices/virtual/video4linux')[0]
 			if device_name:
 				commands.extend([ '-f', 'v4l2', '/dev/' + device_name ])
 		except FileNotFoundError:
 			logger.error(wording.get('stream_not_loaded').format(stream_mode = stream_mode), __name__)
 	return open_ffmpeg(commands)
-def get_available_webcam_ids(webcam_id_start : int, webcam_id_end : int) -> List[int]:
-	available_webcam_ids = []
-	for index in range(webcam_id_start, webcam_id_end):
-		webcam_capture = get_webcam_capture(index)
-		if webcam_capture and webcam_capture.isOpened():
-			available_webcam_ids.append(index)
-		clear_webcam_capture()
-	return available_webcam_ids


@@ -3,29 +3,19 @@ from typing import Optional
 import gradio
 from facefusion import wording
-from facefusion.common_helper import get_first
 from facefusion.uis import choices as uis_choices
-from facefusion.uis.components.webcam import get_available_webcam_ids
 from facefusion.uis.core import register_ui_component
-WEBCAM_DEVICE_ID_DROPDOWN : Optional[gradio.Dropdown] = None
 WEBCAM_MODE_RADIO : Optional[gradio.Radio] = None
 WEBCAM_RESOLUTION_DROPDOWN : Optional[gradio.Dropdown] = None
 WEBCAM_FPS_SLIDER : Optional[gradio.Slider] = None
 def render() -> None:
-	global WEBCAM_DEVICE_ID_DROPDOWN
 	global WEBCAM_MODE_RADIO
 	global WEBCAM_RESOLUTION_DROPDOWN
 	global WEBCAM_FPS_SLIDER
-	available_webcam_ids = get_available_webcam_ids(0, 10) or [ 'none' ] #type:ignore[list-item]
-	WEBCAM_DEVICE_ID_DROPDOWN = gradio.Dropdown(
-		value = get_first(available_webcam_ids),
-		label = wording.get('uis.webcam_device_id_dropdown'),
-		choices = available_webcam_ids
-	)
 	WEBCAM_MODE_RADIO = gradio.Radio(
 		label = wording.get('uis.webcam_mode_radio'),
 		choices = uis_choices.webcam_modes,
@@ -43,7 +33,6 @@ def render() -> None:
 		minimum = 1,
 		maximum = 60
 	)
-	register_ui_component('webcam_device_id_dropdown', WEBCAM_DEVICE_ID_DROPDOWN)
 	register_ui_component('webcam_mode_radio', WEBCAM_MODE_RADIO)
 	register_ui_component('webcam_resolution_dropdown', WEBCAM_RESOLUTION_DROPDOWN)
 	register_ui_component('webcam_fps_slider', WEBCAM_FPS_SLIDER)


@@ -74,7 +74,6 @@ def init() -> None:
 	os.environ['GRADIO_TEMP_DIR'] = os.path.join(state_manager.get_item('temp_path'), 'gradio')
 	warnings.filterwarnings('ignore', category = UserWarning, module = 'gradio')
-	gradio.processing_utils._check_allowed = lambda path, check_in_upload_folder: None
 def launch() -> None:


@@ -52,8 +52,6 @@ ComponentName = Literal\
 	'face_selector_race_dropdown',
 	'face_swapper_model_dropdown',
 	'face_swapper_pixel_boost_dropdown',
-	'face_occluder_model_dropdown',
-	'face_parser_model_dropdown',
 	'frame_colorizer_blend_slider',
 	'frame_colorizer_model_dropdown',
 	'frame_colorizer_size_dropdown',
@@ -73,7 +71,6 @@ ComponentName = Literal\
 	'target_image',
 	'target_video',
 	'ui_workflow_dropdown',
-	'webcam_device_id_dropdown',
 	'webcam_fps_slider',
 	'webcam_mode_radio',
 	'webcam_resolution_dropdown'


@@ -5,7 +5,7 @@ import cv2
 import numpy
 from cv2.typing import Size
-import facefusion.choices
+from facefusion.choices import image_template_sizes, video_template_sizes
 from facefusion.common_helper import is_windows
 from facefusion.filesystem import is_image, is_video, sanitize_path_for_windows
 from facefusion.typing import Duration, Fps, Orientation, Resolution, VisionFrame
@@ -64,8 +64,8 @@ def create_image_resolutions(resolution : Resolution) -> List[str]:
 	if resolution:
 		width, height = resolution
 		temp_resolutions.append(normalize_resolution(resolution))
-		for image_template_size in facefusion.choices.image_template_sizes:
-			temp_resolutions.append(normalize_resolution((width * image_template_size, height * image_template_size)))
+		for template_size in image_template_sizes:
+			temp_resolutions.append(normalize_resolution((width * template_size, height * template_size)))
 	temp_resolutions = sorted(set(temp_resolutions))
 	for temp_resolution in temp_resolutions:
 		resolutions.append(pack_resolution(temp_resolution))
@@ -122,36 +122,11 @@ def restrict_video_fps(video_path : str, fps : Fps) -> Fps:
 def detect_video_duration(video_path : str) -> Duration:
 	video_frame_total = count_video_frame_total(video_path)
 	video_fps = detect_video_fps(video_path)
 	if video_frame_total and video_fps:
 		return video_frame_total / video_fps
 	return 0
-def count_trim_frame_total(video_path : str, trim_frame_start : Optional[int], trim_frame_end : Optional[int]) -> int:
-	trim_frame_start, trim_frame_end = restrict_trim_frame(video_path, trim_frame_start, trim_frame_end)
-	return trim_frame_end - trim_frame_start
-def restrict_trim_frame(video_path : str, trim_frame_start : Optional[int], trim_frame_end : Optional[int]) -> Tuple[int, int]:
-	video_frame_total = count_video_frame_total(video_path)
-	if isinstance(trim_frame_start, int):
-		trim_frame_start = max(0, min(trim_frame_start, video_frame_total))
-	if isinstance(trim_frame_end, int):
-		trim_frame_end = max(0, min(trim_frame_end, video_frame_total))
-	if isinstance(trim_frame_start, int) and isinstance(trim_frame_end, int):
-		return trim_frame_start, trim_frame_end
-	if isinstance(trim_frame_start, int):
-		return trim_frame_start, video_frame_total
-	if isinstance(trim_frame_end, int):
-		return 0, trim_frame_end
-	return 0, video_frame_total
 def detect_video_resolution(video_path : str) -> Optional[Resolution]:
 	if is_video(video_path):
 		if is_windows():
@@ -180,11 +155,11 @@ def create_video_resolutions(resolution : Resolution) -> List[str]:
 	if resolution:
 		width, height = resolution
 		temp_resolutions.append(normalize_resolution(resolution))
-		for video_template_size in facefusion.choices.video_template_sizes:
+		for template_size in video_template_sizes:
 			if width > height:
-				temp_resolutions.append(normalize_resolution((video_template_size * width / height, video_template_size)))
+				temp_resolutions.append(normalize_resolution((template_size * width / height, template_size)))
 			else:
-				temp_resolutions.append(normalize_resolution((video_template_size, video_template_size * height / width)))
+				temp_resolutions.append(normalize_resolution((template_size, template_size * height / width)))
 	temp_resolutions = sorted(set(temp_resolutions))
 	for temp_resolution in temp_resolutions:
 		resolutions.append(pack_resolution(temp_resolution))
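The `create_video_resolutions` loop above scales each template size along the shorter edge while preserving the aspect ratio. A standalone sketch of that calculation; the rounding behaviour of `normalize_resolution` is an assumption here (snapping each dimension to the nearest even integer), not taken from the diff:

```python
def normalize_resolution(resolution):
	# assumed behaviour: round each dimension to the nearest even integer
	width, height = resolution
	return int(round(width / 2)) * 2, int(round(height / 2)) * 2

def scale_to_template(width, height, template_size):
	# landscape: height becomes the template size; portrait: width does
	if width > height:
		return normalize_resolution((template_size * width / height, template_size))
	return normalize_resolution((template_size, template_size * height / width))

print(scale_to_template(1920, 1080, 240))
```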


@@ -126,8 +126,6 @@ WORDING : Dict[str, Any] =\
 	'reference_face_distance': 'specify the similarity between the reference face and target face',
 	'reference_frame_number': 'specify the frame used to create the reference face',
 	# face masker
-	'face_occluder_model': 'choose the model responsible for the occlusion mask',
-	'face_parser_model': 'choose the model responsible for the region mask',
 	'face_mask_types': 'mix and match different face mask types (choices: {choices})',
 	'face_mask_blur': 'specify the degree of blur applied to the box mask',
 	'face_mask_padding': 'apply top, right, bottom and left padding to the box mask',
@@ -287,8 +285,6 @@ WORDING : Dict[str, Any] =\
 	'face_selector_race_dropdown': 'FACE SELECTOR RACE',
 	'face_swapper_model_dropdown': 'FACE SWAPPER MODEL',
 	'face_swapper_pixel_boost_dropdown': 'FACE SWAPPER PIXEL BOOST',
-	'face_occluder_model_dropdown': 'FACE OCCLUDER MODEL',
-	'face_parser_model_dropdown': 'FACE PARSER MODEL',
 	'frame_colorizer_blend_slider': 'FRAME COLORIZER BLEND',
 	'frame_colorizer_model_dropdown': 'FRAME COLORIZER MODEL',
 	'frame_colorizer_size_dropdown': 'FRAME COLORIZER SIZE',
@@ -330,7 +326,6 @@ WORDING : Dict[str, Any] =\
 	'video_memory_strategy_dropdown': 'VIDEO MEMORY STRATEGY',
 	'webcam_fps_slider': 'WEBCAM FPS',
 	'webcam_image': 'WEBCAM',
-	'webcam_device_id_dropdown': 'WEBCAM DEVICE ID',
 	'webcam_mode_radio': 'WEBCAM MODE',
 	'webcam_resolution_dropdown': 'WEBCAM RESOLUTION'
 }


@@ -1,10 +1,9 @@
 filetype==1.2.0
-gradio==5.9.1
-gradio-rangeslider==0.0.8
-numpy==2.2.0
+litestar==2.13.0
+numpy==2.1.3
 onnx==1.17.0
 onnxruntime==1.20.1
 opencv-python==4.10.0.84
-psutil==6.1.1
+psutil==6.1.0
 tqdm==4.67.1
 scipy==1.14.1


@@ -17,8 +17,8 @@ def before_all() -> None:
 def test_get_audio_frame() -> None:
-	assert hasattr(get_audio_frame(get_test_example_file('source.mp3'), 25), '__array_interface__')
-	assert hasattr(get_audio_frame(get_test_example_file('source.wav'), 25), '__array_interface__')
+	assert get_audio_frame(get_test_example_file('source.mp3'), 25) is not None
+	assert get_audio_frame(get_test_example_file('source.wav'), 25) is not None
 	assert get_audio_frame('invalid', 25) is None


@ -1,18 +1,18 @@
from facefusion.download import get_static_download_size, ping_static_url, resolve_download_url_by_provider import pytest
from facefusion.download import conditional_download, get_download_size
from .helper import get_test_examples_directory
def test_get_static_download_size() -> None: @pytest.fixture(scope = 'module', autouse = True)
assert get_static_download_size('https://github.com/facefusion/facefusion-assets/releases/download/models-3.0.0/fairface.onnx') == 85170772 def before_all() -> None:
assert get_static_download_size('https://huggingface.co/facefusion/models-3.0.0/resolve/main/fairface.onnx') == 85170772 conditional_download(get_test_examples_directory(),
assert get_static_download_size('invalid') == 0 [
'https://github.com/facefusion/facefusion-assets/releases/download/examples-3.0.0/target-240p.mp4'
])
def test_static_ping_url() -> None: def test_get_download_size() -> None:
assert ping_static_url('https://github.com') is True assert get_download_size('https://github.com/facefusion/facefusion-assets/releases/download/examples-3.0.0/target-240p.mp4') == 191675
assert ping_static_url('https://huggingface.co') is True assert get_download_size('https://github.com/facefusion/facefusion-assets/releases/download/examples-3.0.0/target-360p.mp4') == 370732
assert ping_static_url('invalid') is False assert get_download_size('invalid') == 0
def test_resolve_download_url_by_provider() -> None:
assert resolve_download_url_by_provider('github', 'models-3.0.0', 'fairface.onnx') == 'https://github.com/facefusion/facefusion-assets/releases/download/models-3.0.0/fairface.onnx'
assert resolve_download_url_by_provider('huggingface', 'models-3.0.0', 'fairface.onnx') == 'https://huggingface.co/facefusion/models-3.0.0/resolve/main/fairface.onnx'
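A `get_download_size` of this kind presumably derives the byte count from an HTTP `Content-Length` header and falls back to `0` for unreachable URLs. A hypothetical, offline sketch of just that parsing step (the helper name and header handling are assumptions, not facefusion's actual implementation):

```python
from typing import Mapping

def parse_download_size(headers : Mapping[str, str]) -> int:
	# return 0 when the header is missing or malformed, mirroring the == 0 fallback in the test
	try:
		return int(headers.get('Content-Length', 0))
	except ValueError:
		return 0

print(parse_download_size({ 'Content-Length': '191675' }))
```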


@@ -1,4 +1,8 @@
-from facefusion.execution import create_inference_execution_providers, get_available_execution_providers, has_execution_provider
+from facefusion.execution import create_execution_providers, get_execution_provider_set, has_execution_provider
+def test_get_execution_provider_set() -> None:
+	assert 'cpu' in get_execution_provider_set().keys()
 def test_has_execution_provider() -> None:
@@ -6,11 +10,7 @@ def test_has_execution_provider() -> None:
 	assert has_execution_provider('openvino') is False
-def test_get_available_execution_providers() -> None:
-	assert 'cpu' in get_available_execution_providers()
-def test_create_inference_execution_providers() -> None:
+def test_multiple_execution_providers() -> None:
 	execution_providers =\
 	[
 		('CUDAExecutionProvider',
@@ -20,4 +20,4 @@ def test_create_inference_execution_providers() -> None:
 		'CPUExecutionProvider'
 	]
-	assert create_inference_execution_providers('1', [ 'cpu', 'cuda' ]) == execution_providers
+	assert create_execution_providers('1', [ 'cpu', 'cuda' ]) == execution_providers


@@ -33,30 +33,78 @@ def before_all() -> None:
 @pytest.fixture(scope = 'function', autouse = True)
 def before_each() -> None:
+	state_manager.clear_item('trim_frame_start')
+	state_manager.clear_item('trim_frame_end')
 	prepare_test_output_directory()
 def test_extract_frames() -> None:
-	extract_set =\
+	target_paths =\
 	[
-		(get_test_example_file('target-240p-25fps.mp4'), 0, 270, 324),
-		(get_test_example_file('target-240p-25fps.mp4'), 224, 270, 55),
-		(get_test_example_file('target-240p-25fps.mp4'), 124, 224, 120),
-		(get_test_example_file('target-240p-25fps.mp4'), 0, 100, 120),
-		(get_test_example_file('target-240p-30fps.mp4'), 0, 324, 324),
-		(get_test_example_file('target-240p-30fps.mp4'), 224, 324, 100),
-		(get_test_example_file('target-240p-30fps.mp4'), 124, 224, 100),
-		(get_test_example_file('target-240p-30fps.mp4'), 0, 100, 100),
-		(get_test_example_file('target-240p-60fps.mp4'), 0, 648, 324),
-		(get_test_example_file('target-240p-60fps.mp4'), 224, 648, 212),
-		(get_test_example_file('target-240p-60fps.mp4'), 124, 224, 50),
-		(get_test_example_file('target-240p-60fps.mp4'), 0, 100, 50)
+		get_test_example_file('target-240p-25fps.mp4'),
+		get_test_example_file('target-240p-30fps.mp4'),
+		get_test_example_file('target-240p-60fps.mp4')
 	]
-	for target_path, trim_frame_start, trim_frame_end, frame_total in extract_set:
+	for target_path in target_paths:
 		create_temp_directory(target_path)
-		assert extract_frames(target_path, '452x240', 30.0, trim_frame_start, trim_frame_end) is True
+		assert extract_frames(target_path, '452x240', 30.0) is True
+		assert len(get_temp_frame_paths(target_path)) == 324
+		clear_temp_directory(target_path)
+def test_extract_frames_with_trim_start() -> None:
+	state_manager.init_item('trim_frame_start', 224)
+	target_paths =\
+	[
+		(get_test_example_file('target-240p-25fps.mp4'), 55),
+		(get_test_example_file('target-240p-30fps.mp4'), 100),
+		(get_test_example_file('target-240p-60fps.mp4'), 212)
+	]
+	for target_path, frame_total in target_paths:
+		create_temp_directory(target_path)
+		assert extract_frames(target_path, '452x240', 30.0) is True
+		assert len(get_temp_frame_paths(target_path)) == frame_total
+		clear_temp_directory(target_path)
+def test_extract_frames_with_trim_start_and_trim_end() -> None:
+	state_manager.init_item('trim_frame_start', 124)
+	state_manager.init_item('trim_frame_end', 224)
+	target_paths =\
+	[
+		(get_test_example_file('target-240p-25fps.mp4'), 120),
+		(get_test_example_file('target-240p-30fps.mp4'), 100),
+		(get_test_example_file('target-240p-60fps.mp4'), 50)
+	]
+	for target_path, frame_total in target_paths:
+		create_temp_directory(target_path)
+		assert extract_frames(target_path, '452x240', 30.0) is True
+		assert len(get_temp_frame_paths(target_path)) == frame_total
+		clear_temp_directory(target_path)
+def test_extract_frames_with_trim_end() -> None:
+	state_manager.init_item('trim_frame_end', 100)
+	target_paths =\
+	[
+		(get_test_example_file('target-240p-25fps.mp4'), 120),
+		(get_test_example_file('target-240p-30fps.mp4'), 100),
+		(get_test_example_file('target-240p-60fps.mp4'), 50)
+	]
+	for target_path, frame_total in target_paths:
+		create_temp_directory(target_path)
+		assert extract_frames(target_path, '426x240', 30.0) is True
 		assert len(get_temp_frame_paths(target_path)) == frame_total
 		clear_temp_directory(target_path)
@@ -91,7 +139,7 @@ def test_restore_audio() -> None:
 	create_temp_directory(target_path)
 	copy_file(target_path, get_temp_file_path(target_path))
-	assert restore_audio(target_path, output_path, 30, 0, 270) is True
+	assert restore_audio(target_path, output_path, 30) is True
 	clear_temp_directory(target_path)
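The expected frame totals in these extraction tests follow from rescaling the trimmed source range to the 30 fps extraction rate: a 25 fps clip trimmed from frame 224 to 270 spans 46 source frames, which yields int(46 * 30 / 25) = 55 extracted frames. A sketch of that arithmetic (the helper name and truncation behaviour are assumptions inferred from the totals above):

```python
def expected_frame_total(trim_frame_start, trim_frame_end, source_fps, extract_fps = 30.0):
	# trimmed source frame count rescaled to the extraction frame rate, truncated toward zero
	return int((trim_frame_end - trim_frame_start) * extract_fps / source_fps)

print(expected_frame_total(224, 270, 25.0))
```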


@@ -105,11 +105,8 @@ def test_create_directory() -> None:
 def test_list_directory() -> None:
-	files = list_directory(get_test_examples_directory())
-	assert list_directory(get_test_example_file('source.jpg')) is None
-	for file in files:
-		assert file.get('path') == get_test_example_file(file.get('name') + file.get('extension'))
+	assert list_directory(get_test_examples_directory())
 	assert list_directory('invalid') is None


@@ -3,7 +3,7 @@ import subprocess
 import pytest
 from facefusion.download import conditional_download
-from facefusion.vision import calc_histogram_difference, count_trim_frame_total, count_video_frame_total, create_image_resolutions, create_video_resolutions, detect_image_resolution, detect_video_duration, detect_video_fps, detect_video_resolution, get_video_frame, match_frame_color, normalize_resolution, pack_resolution, read_image, restrict_image_resolution, restrict_trim_frame, restrict_video_fps, restrict_video_resolution, unpack_resolution
+from facefusion.vision import calc_histogram_difference, count_video_frame_total, create_image_resolutions, create_video_resolutions, detect_image_resolution, detect_video_duration, detect_video_fps, detect_video_resolution, get_video_frame, match_frame_color, normalize_resolution, pack_resolution, read_image, restrict_image_resolution, restrict_video_fps, restrict_video_resolution, unpack_resolution
 from .helper import get_test_example_file, get_test_examples_directory
@@ -50,7 +50,7 @@ def test_create_image_resolutions() -> None:
 def test_get_video_frame() -> None:
-	assert hasattr(get_video_frame(get_test_example_file('target-240p-25fps.mp4')), '__array_interface__')
+	assert get_video_frame(get_test_example_file('target-240p-25fps.mp4')) is not None
 	assert get_video_frame('invalid') is None
@@ -79,26 +79,6 @@ def test_detect_video_duration() -> None:
 	assert detect_video_duration('invalid') == 0
-def test_count_trim_frame_total() -> None:
-	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), 0, 200) == 200
-	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), 70, 270) == 200
-	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), -10, None) == 270
-	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), None, -10) == 0
-	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), 280, None) == 0
-	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), None, 280) == 270
-	assert count_trim_frame_total(get_test_example_file('target-240p.mp4'), None, None) == 270
-def test_restrict_trim_frame() -> None:
-	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), 0, 200) == (0, 200)
-	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), 70, 270) == (70, 270)
-	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), -10, None) == (0, 270)
-	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), None, -10) == (0, 0)
-	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), 280, None) == (270, 270)
-	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), None, 280) == (0, 270)
-	assert restrict_trim_frame(get_test_example_file('target-240p.mp4'), None, None) == (0, 270)
 def test_detect_video_resolution() -> None:
 	assert detect_video_resolution(get_test_example_file('target-240p.mp4')) == (426, 226)
 	assert detect_video_resolution(get_test_example_file('target-240p-90deg.mp4')) == (226, 426)
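The removed `restrict_trim_frame` clamps both trim positions into [0, video_frame_total] and substitutes the clip bounds for missing ends. A pure-function sketch of that logic, with `video_frame_total` passed in directly instead of being read via `count_video_frame_total`, checked against the assertion values from `test_restrict_trim_frame` (a 270-frame clip):

```python
from typing import Optional, Tuple

def restrict_trim_frame(video_frame_total : int, trim_frame_start : Optional[int], trim_frame_end : Optional[int]) -> Tuple[int, int]:
	# clamp any provided positions into the valid frame range
	if isinstance(trim_frame_start, int):
		trim_frame_start = max(0, min(trim_frame_start, video_frame_total))
	if isinstance(trim_frame_end, int):
		trim_frame_end = max(0, min(trim_frame_end, video_frame_total))
	# fill in missing ends with the clip bounds
	if isinstance(trim_frame_start, int) and isinstance(trim_frame_end, int):
		return trim_frame_start, trim_frame_end
	if isinstance(trim_frame_start, int):
		return trim_frame_start, video_frame_total
	if isinstance(trim_frame_end, int):
		return 0, trim_frame_end
	return 0, video_frame_total

print(restrict_trim_frame(270, -10, None))
```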